Test Report: Hyper-V_Windows 19389

4e9c16444aca391b349fd87cc48c80a0a38d518e:2024-08-07:35690

Failed tests (38/197)

Order  Failed test  Duration (s)
42 TestAddons/parallel/Registry 79.56
65 TestErrorSpam/setup 200.29
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 35
90 TestFunctional/serial/ExtraConfig 345.73
91 TestFunctional/serial/ComponentHealth 180.41
94 TestFunctional/serial/InvalidService 4.27
96 TestFunctional/parallel/ConfigCmd 1.8
100 TestFunctional/parallel/StatusCmd 286.3
104 TestFunctional/parallel/ServiceCmdConnect 273.11
106 TestFunctional/parallel/PersistentVolumeClaim 582.16
110 TestFunctional/parallel/MySQL 296.61
116 TestFunctional/parallel/NodeLabels 179.5
121 TestFunctional/parallel/DockerEnv/powershell 477.48
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 8.09
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 4.24
138 TestFunctional/parallel/ServiceCmd/DeployApp 2.16
139 TestFunctional/parallel/ServiceCmd/List 7.41
140 TestFunctional/parallel/ServiceCmd/JSONOutput 7.45
141 TestFunctional/parallel/ServiceCmd/HTTPS 7.58
142 TestFunctional/parallel/ServiceCmd/Format 7.66
143 TestFunctional/parallel/ServiceCmd/URL 7.67
144 TestFunctional/parallel/ImageCommands/ImageListShort 60.03
145 TestFunctional/parallel/ImageCommands/ImageListTable 59.43
146 TestFunctional/parallel/ImageCommands/ImageListJson 45.94
147 TestFunctional/parallel/ImageCommands/ImageListYaml 60.07
148 TestFunctional/parallel/ImageCommands/ImageBuild 119.46
150 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 104.22
151 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 120.47
152 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 120.45
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 60.34
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.45
167 TestMultiControlPlane/serial/PingHostFromPods 71.18
171 TestMultiControlPlane/serial/CopyFile 694.8
221 TestMountStart/serial/RestartStopped 192.79
226 TestMultiNode/serial/PingHostFrom2Pods 58.72
233 TestMultiNode/serial/RestartKeepsNodes 396.36
249 TestNoKubernetes/serial/StartWithK8s 299.9
260 TestPause/serial/Start 10800.478
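A list like the one above is easy to post-process. As a small sketch (using a hand-copied subset of the rows, since the full table is long), this flags every failure that ran longer than five minutes:

```shell
# Hypothetical post-processing of the failure table above.
# Columns: order, test name, duration in seconds.
cat <<'EOF' > /tmp/failed_tests.txt
42 TestAddons/parallel/Registry 79.56
90 TestFunctional/serial/ExtraConfig 345.73
106 TestFunctional/parallel/PersistentVolumeClaim 582.16
260 TestPause/serial/Start 10800.478
EOF

# Print tests whose duration exceeds 300s (likely hangs or timeouts).
awk '$3 > 300 { print $2, $3 "s" }' /tmp/failed_tests.txt
```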
TestAddons/parallel/Registry (79.56s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 7.8752ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-vvp9k" [fbcec4dd-c203-404c-9799-d26bb331baa3] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.014851s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-f6jnh" [61198126-34a4-4d07-a59f-ced9ba3baa61] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.01831s
addons_test.go:342: (dbg) Run:  kubectl --context addons-463600 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-463600 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-463600 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.9268615s)
addons_test.go:361: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-463600 ip
addons_test.go:361: (dbg) Done: out/minikube-windows-amd64.exe -p addons-463600 ip: (2.6359661s)
addons_test.go:366: expected stderr to be -empty- but got: *"W0807 17:40:50.968353    9400 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-463600 ip"
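The hex directory name in that warning is not random: the Docker CLI keys its context store by the SHA-256 digest of the context name, so the missing path under `.docker\contexts\meta` corresponds to the `default` context. A quick way to confirm, assuming a Unix-like shell with `sha256sum` available:

```shell
# The context store directory name is sha256 of the context name "default".
printf '%s' default | sha256sum
# → 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f  -
```

The digest matches the directory named in the warning, which points at a missing `default` context metadata file rather than a corrupted one.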
2024/08/07 17:40:53 [DEBUG] GET http://172.28.235.128:5000
addons_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-463600 addons disable registry --alsologtostderr -v=1
addons_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe -p addons-463600 addons disable registry --alsologtostderr -v=1: (17.2865682s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-463600 -n addons-463600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-463600 -n addons-463600: (14.0118444s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-463600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-463600 logs -n 25: (10.6966304s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:30 UTC | 07 Aug 24 17:30 UTC |
	| delete  | -p download-only-481900                                                                     | download-only-481900 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:30 UTC | 07 Aug 24 17:30 UTC |
	| delete  | -p download-only-154100                                                                     | download-only-154100 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:30 UTC | 07 Aug 24 17:30 UTC |
	| delete  | -p download-only-734300                                                                     | download-only-734300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:30 UTC | 07 Aug 24 17:30 UTC |
	| delete  | -p download-only-481900                                                                     | download-only-481900 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:30 UTC | 07 Aug 24 17:30 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-249700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:30 UTC |                     |
	|         | binary-mirror-249700                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:65495                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-249700                                                                     | binary-mirror-249700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:30 UTC | 07 Aug 24 17:30 UTC |
	| addons  | disable dashboard -p                                                                        | addons-463600        | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:30 UTC |                     |
	|         | addons-463600                                                                               |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-463600        | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:30 UTC |                     |
	|         | addons-463600                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-463600 --wait=true                                                                | addons-463600        | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:30 UTC | 07 Aug 24 17:38 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |                   |         |                     |                     |
	|         | --driver=hyperv --addons=ingress                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-463600 addons disable                                                                | addons-463600        | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:39 UTC | 07 Aug 24 17:39 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |                   |         |                     |                     |
	| addons  | addons-463600 addons disable                                                                | addons-463600        | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:39 UTC | 07 Aug 24 17:40 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-463600 addons                                                                        | addons-463600        | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:40 UTC | 07 Aug 24 17:40 UTC |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-463600 addons disable                                                                | addons-463600        | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:40 UTC | 07 Aug 24 17:40 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| ssh     | addons-463600 ssh cat                                                                       | addons-463600        | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:40 UTC | 07 Aug 24 17:40 UTC |
	|         | /opt/local-path-provisioner/pvc-5271f59d-3a51-427c-baa4-cc93a8edce35_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| addons  | addons-463600 addons disable                                                                | addons-463600        | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:40 UTC | 07 Aug 24 17:40 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-463600        | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:40 UTC | 07 Aug 24 17:41 UTC |
	|         | addons-463600                                                                               |                      |                   |         |                     |                     |
	| ip      | addons-463600 ip                                                                            | addons-463600        | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:40 UTC | 07 Aug 24 17:40 UTC |
	| addons  | addons-463600 addons disable                                                                | addons-463600        | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:40 UTC | 07 Aug 24 17:41 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| ssh     | addons-463600 ssh curl -s                                                                   | addons-463600        | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:40 UTC | 07 Aug 24 17:41 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |                   |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |                   |         |                     |                     |
	| addons  | addons-463600 addons                                                                        | addons-463600        | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:40 UTC | 07 Aug 24 17:41 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ip      | addons-463600 ip                                                                            | addons-463600        | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:41 UTC | 07 Aug 24 17:41 UTC |
	| addons  | addons-463600 addons disable                                                                | addons-463600        | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:41 UTC |                     |
	|         | ingress-dns --alsologtostderr                                                               |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-463600        | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:41 UTC |                     |
	|         | addons-463600                                                                               |                      |                   |         |                     |                     |
	| addons  | addons-463600 addons                                                                        | addons-463600        | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:41 UTC |                     |
	|         | disable volumesnapshots                                                                     |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 17:30:51
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 17:30:51.325992    2316 out.go:291] Setting OutFile to fd 624 ...
	I0807 17:30:51.327008    2316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:30:51.327008    2316 out.go:304] Setting ErrFile to fd 672...
	I0807 17:30:51.327759    2316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:30:51.349711    2316 out.go:298] Setting JSON to false
	I0807 17:30:51.352840    2316 start.go:129] hostinfo: {"hostname":"minikube6","uptime":313780,"bootTime":1722738070,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4717 Build 19045.4717","kernelVersion":"10.0.19045.4717 Build 19045.4717","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0807 17:30:51.353481    2316 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 17:30:51.360705    2316 out.go:177] * [addons-463600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	I0807 17:30:51.364719    2316 notify.go:220] Checking for updates...
	I0807 17:30:51.367360    2316 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 17:30:51.370072    2316 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 17:30:51.372772    2316 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0807 17:30:51.377296    2316 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 17:30:51.379855    2316 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 17:30:51.383677    2316 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 17:30:56.896732    2316 out.go:177] * Using the hyperv driver based on user configuration
	I0807 17:30:56.900641    2316 start.go:297] selected driver: hyperv
	I0807 17:30:56.900641    2316 start.go:901] validating driver "hyperv" against <nil>
	I0807 17:30:56.900641    2316 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 17:30:56.953483    2316 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 17:30:56.955275    2316 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 17:30:56.955275    2316 cni.go:84] Creating CNI manager for ""
	I0807 17:30:56.955394    2316 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 17:30:56.955452    2316 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 17:30:56.955452    2316 start.go:340] cluster config:
	{Name:addons-463600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-463600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 17:30:56.955452    2316 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 17:30:56.961181    2316 out.go:177] * Starting "addons-463600" primary control-plane node in "addons-463600" cluster
	I0807 17:30:56.963539    2316 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 17:30:56.963539    2316 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0807 17:30:56.963539    2316 cache.go:56] Caching tarball of preloaded images
	I0807 17:30:56.964145    2316 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0807 17:30:56.964464    2316 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 17:30:56.965141    2316 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\config.json ...
	I0807 17:30:56.965458    2316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\config.json: {Name:mk257e8047ead8b46876cfe69d793b3d58bf00d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:30:56.966325    2316 start.go:360] acquireMachinesLock for addons-463600: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 17:30:56.967510    2316 start.go:364] duration metric: took 645.5µs to acquireMachinesLock for "addons-463600"
	I0807 17:30:56.967587    2316 start.go:93] Provisioning new machine with config: &{Name:addons-463600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:addons-463600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 17:30:56.967587    2316 start.go:125] createHost starting for "" (driver="hyperv")
	I0807 17:30:56.970997    2316 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0807 17:30:56.971155    2316 start.go:159] libmachine.API.Create for "addons-463600" (driver="hyperv")
	I0807 17:30:56.971155    2316 client.go:168] LocalClient.Create starting
	I0807 17:30:56.971826    2316 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0807 17:30:57.111763    2316 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0807 17:30:57.325709    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0807 17:30:59.541048    2316 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0807 17:30:59.541133    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:30:59.541284    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0807 17:31:01.261806    2316 main.go:141] libmachine: [stdout =====>] : False
	
	I0807 17:31:01.261898    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:31:01.262022    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0807 17:31:02.729267    2316 main.go:141] libmachine: [stdout =====>] : True
	
	I0807 17:31:02.729267    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:31:02.729563    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0807 17:31:06.560777    2316 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0807 17:31:06.560777    2316 main.go:141] libmachine: [stderr =====>] : 
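	The `ConvertTo-Json`/`Where-Object` invocation above emits the switch list that libmachine then parses to pick a network switch. As an illustrative sketch (not minikube's actual code), the selection logic can be rendered in Python, assuming the Hyper-V `SwitchType` enum where External = 2 and Internal = 1, and the well-known Default Switch GUID seen in the log:

	```python
	import json

	# stdout as captured in the log above (Default Switch, SwitchType 1 = Internal)
	stdout = '''[
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]'''

	switches = json.loads(stdout)

	# Mirror the Where-Object filter: prefer an External switch (SwitchType 2),
	# otherwise fall back to the well-known Default Switch GUID.
	DEFAULT_SWITCH_ID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"
	chosen = next(
	    (s for s in switches if s["SwitchType"] == 2),
	    next((s for s in switches if s["Id"] == DEFAULT_SWITCH_ID), None),
	)
	print(chosen["Name"])  # -> Default Switch
	```

	With no External switch present, the fallback fires, which matches the later log line `Using switch "Default Switch"`.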
	I0807 17:31:06.563996    2316 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0807 17:31:07.022645    2316 main.go:141] libmachine: Creating SSH key...
	I0807 17:31:07.274580    2316 main.go:141] libmachine: Creating VM...
	I0807 17:31:07.274580    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0807 17:31:10.192411    2316 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0807 17:31:10.192411    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:31:10.192633    2316 main.go:141] libmachine: Using switch "Default Switch"
	I0807 17:31:10.192838    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0807 17:31:11.911606    2316 main.go:141] libmachine: [stdout =====>] : True
	
	I0807 17:31:11.912022    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:31:11.912022    2316 main.go:141] libmachine: Creating VHD
	I0807 17:31:11.912203    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0807 17:31:15.786213    2316 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 3D9F4DB8-E666-44A2-8C5D-86F8DC18F431
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0807 17:31:15.786213    2316 main.go:141] libmachine: [stderr =====>] : 
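	Note the `New-VHD` output above reports `Size 10485760` but `FileSize 10486272`. That 512-byte difference is the classic VHD footer appended to a fixed-format image; a quick arithmetic check (my annotation, not part of the log):

	```python
	# -SizeBytes 10MB as passed to Hyper-V\New-VHD
	size_bytes = 10 * 1024 * 1024   # logical disk size: 10485760
	VHD_FOOTER = 512                # fixed-VHD trailer length
	file_size = size_bytes + VHD_FOOTER
	print(size_bytes, file_size)    # 10485760 10486272, matching the log
	```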
	I0807 17:31:15.786304    2316 main.go:141] libmachine: Writing magic tar header
	I0807 17:31:15.786446    2316 main.go:141] libmachine: Writing SSH key tar header
	I0807 17:31:15.797641    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0807 17:31:19.059658    2316 main.go:141] libmachine: [stdout =====>] : 
	I0807 17:31:19.059658    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:31:19.060426    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\disk.vhd' -SizeBytes 20000MB
	I0807 17:31:21.716780    2316 main.go:141] libmachine: [stdout =====>] : 
	I0807 17:31:21.716780    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:31:21.716780    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-463600 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0807 17:31:26.091328    2316 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-463600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0807 17:31:26.091328    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:31:26.091979    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-463600 -DynamicMemoryEnabled $false
	I0807 17:31:28.397790    2316 main.go:141] libmachine: [stdout =====>] : 
	I0807 17:31:28.398064    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:31:28.398064    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-463600 -Count 2
	I0807 17:31:30.631902    2316 main.go:141] libmachine: [stdout =====>] : 
	I0807 17:31:30.631902    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:31:30.632082    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-463600 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\boot2docker.iso'
	I0807 17:31:33.326349    2316 main.go:141] libmachine: [stdout =====>] : 
	I0807 17:31:33.326349    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:31:33.327083    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-463600 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\disk.vhd'
	I0807 17:31:36.062882    2316 main.go:141] libmachine: [stdout =====>] : 
	I0807 17:31:36.062882    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:31:36.062882    2316 main.go:141] libmachine: Starting VM...
	I0807 17:31:36.063818    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-463600
	I0807 17:31:39.339526    2316 main.go:141] libmachine: [stdout =====>] : 
	I0807 17:31:39.339526    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:31:39.339526    2316 main.go:141] libmachine: Waiting for host to start...
	I0807 17:31:39.339526    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:31:41.711042    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:31:41.711042    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:31:41.711839    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:31:44.241922    2316 main.go:141] libmachine: [stdout =====>] : 
	I0807 17:31:44.243075    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:31:45.250229    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:31:47.481340    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:31:47.481340    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:31:47.481340    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:31:50.077006    2316 main.go:141] libmachine: [stdout =====>] : 
	I0807 17:31:50.077568    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:31:51.085937    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:31:53.345219    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:31:53.346089    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:31:53.346141    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:31:55.970431    2316 main.go:141] libmachine: [stdout =====>] : 
	I0807 17:31:55.970431    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:31:56.981284    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:31:59.295469    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:31:59.295469    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:31:59.295548    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:32:02.014850    2316 main.go:141] libmachine: [stdout =====>] : 
	I0807 17:32:02.014850    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:32:03.022892    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:32:05.366580    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:32:05.366580    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:32:05.367572    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:32:07.970215    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:32:07.970277    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:32:07.970277    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:32:10.140075    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:32:10.140075    2316 main.go:141] libmachine: [stderr =====>] : 
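	The repeated `( Hyper-V\Get-VM … ).state` / `.networkadapters[0].ipaddresses[0]` pairs above are a poll loop: the VM reports `Running` almost immediately, but the adapter has no address until DHCP completes (empty stdout for several rounds, then `172.28.235.128`). A minimal Python sketch of that wait loop, with the two PowerShell invocations stubbed out as callables (hypothetical helper names, not minikube code):

	```python
	import time

	def wait_for_ip(get_state, get_ip, delay=1.0, attempts=30):
	    """Poll VM state and the first NIC address until an IP appears.
	    get_state/get_ip stand in for the two PowerShell calls in the log."""
	    for _ in range(attempts):
	        if get_state() == "Running":
	            ip = get_ip()
	            if ip:                      # empty string -> DHCP not done yet
	                return ip
	        time.sleep(delay)
	    raise TimeoutError("host did not report an IP address")

	# Simulated adapter responses: empty for a few rounds, then the log's address.
	responses = iter(["", "", "", "172.28.235.128"])
	ip = wait_for_ip(lambda: "Running", lambda: next(responses), delay=0)
	print(ip)  # 172.28.235.128
	```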
	I0807 17:32:10.141023    2316 machine.go:94] provisionDockerMachine start ...
	I0807 17:32:10.141536    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:32:12.358283    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:32:12.358283    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:32:12.358865    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:32:14.957657    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:32:14.957732    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:32:14.962518    2316 main.go:141] libmachine: Using SSH client type: native
	I0807 17:32:14.975188    2316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.128 22 <nil> <nil>}
	I0807 17:32:14.975188    2316 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 17:32:15.110950    2316 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0807 17:32:15.110950    2316 buildroot.go:166] provisioning hostname "addons-463600"
	I0807 17:32:15.110950    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:32:17.269491    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:32:17.269656    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:32:17.269861    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:32:19.863348    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:32:19.863706    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:32:19.869184    2316 main.go:141] libmachine: Using SSH client type: native
	I0807 17:32:19.869844    2316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.128 22 <nil> <nil>}
	I0807 17:32:19.869844    2316 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-463600 && echo "addons-463600" | sudo tee /etc/hostname
	I0807 17:32:20.023419    2316 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-463600
	
	I0807 17:32:20.023596    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:32:22.215704    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:32:22.215704    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:32:22.216517    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:32:24.845245    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:32:24.845508    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:32:24.850748    2316 main.go:141] libmachine: Using SSH client type: native
	I0807 17:32:24.851527    2316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.128 22 <nil> <nil>}
	I0807 17:32:24.851527    2316 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-463600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-463600/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-463600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 17:32:25.004858    2316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
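	The shell snippet just executed over SSH makes the `/etc/hosts` hostname entry idempotent: skip if any line already ends in the machine name, rewrite an existing `127.0.1.1` line if present, otherwise append one. The same logic as a pure-string Python sketch (an annotation of the log, operating on text rather than the real file):

	```python
	import re

	def ensure_hostname(hosts: str, name: str) -> str:
	    """Mirror the grep/sed/tee branches above for a hosts-file string."""
	    if re.search(rf"^.*\s{re.escape(name)}$", hosts, re.M):
	        return hosts                                  # already mapped
	    if re.search(r"^127\.0\.1\.1\s.*$", hosts, re.M):
	        return re.sub(r"^127\.0\.1\.1\s.*$",
	                      f"127.0.1.1 {name}", hosts, flags=re.M)
	    return hosts + f"127.0.1.1 {name}\n"              # append a new entry

	hosts = "127.0.0.1 localhost\n"
	updated = ensure_hostname(hosts, "addons-463600")
	print("127.0.1.1 addons-463600" in updated)  # True
	# Running it again is a no-op, like the guarded shell version.
	assert ensure_hostname(updated, "addons-463600") == updated
	```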
	I0807 17:32:25.004858    2316 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0807 17:32:25.004858    2316 buildroot.go:174] setting up certificates
	I0807 17:32:25.004858    2316 provision.go:84] configureAuth start
	I0807 17:32:25.004858    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:32:27.235134    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:32:27.235785    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:32:27.235911    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:32:29.820338    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:32:29.820338    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:32:29.820338    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:32:32.056435    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:32:32.056435    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:32:32.056574    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:32:34.661842    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:32:34.662025    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:32:34.662025    2316 provision.go:143] copyHostCerts
	I0807 17:32:34.662808    2316 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0807 17:32:34.664417    2316 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0807 17:32:34.665842    2316 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0807 17:32:34.667178    2316 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-463600 san=[127.0.0.1 172.28.235.128 addons-463600 localhost minikube]
	I0807 17:32:34.853823    2316 provision.go:177] copyRemoteCerts
	I0807 17:32:34.869702    2316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 17:32:34.869864    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:32:37.062743    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:32:37.062743    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:32:37.062743    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:32:39.645852    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:32:39.645852    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:32:39.646207    2316 sshutil.go:53] new ssh client: &{IP:172.28.235.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\id_rsa Username:docker}
	I0807 17:32:39.757091    2316 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8870703s)
	I0807 17:32:39.757393    2316 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0807 17:32:39.805657    2316 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 17:32:39.850837    2316 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0807 17:32:39.900359    2316 provision.go:87] duration metric: took 14.8953082s to configureAuth
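	The `duration metric` lines can be sanity-checked against the klog timestamps themselves, which are stamped `I<mmdd> <HH:MM:SS.ffffff>` with no year. A small parsing sketch (my annotation): the wall-clock delta between the `configureAuth start` line and this one comes out at ~14.8955s, close to but not identical to the reported 14.8953082s, since the code measures with its own internal clock:

	```python
	from datetime import datetime

	FMT = "%m%d %H:%M:%S.%f"

	def klog_ts(line: str) -> datetime:
	    # "I0807 17:32:25.004858    2316 ..." -> drop the level char,
	    # keep the month/day + time stamp before the PID column.
	    stamp = line[1:].split("    ")[0].strip()
	    return datetime.strptime(stamp, FMT)

	start = klog_ts("I0807 17:32:25.004858    2316 provision.go:84] configureAuth start")
	end = klog_ts("I0807 17:32:39.900359    2316 provision.go:87] duration metric")
	print(round((end - start).total_seconds(), 3))  # 14.896
	```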
	I0807 17:32:39.900359    2316 buildroot.go:189] setting minikube options for container-runtime
	I0807 17:32:39.901342    2316 config.go:182] Loaded profile config "addons-463600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 17:32:39.901342    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:32:42.086056    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:32:42.086056    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:32:42.086056    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:32:44.665358    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:32:44.666058    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:32:44.672115    2316 main.go:141] libmachine: Using SSH client type: native
	I0807 17:32:44.672728    2316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.128 22 <nil> <nil>}
	I0807 17:32:44.672852    2316 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0807 17:32:44.796449    2316 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0807 17:32:44.796449    2316 buildroot.go:70] root file system type: tmpfs
	I0807 17:32:44.796828    2316 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0807 17:32:44.796828    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:32:46.988979    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:32:46.988979    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:32:46.989761    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:32:49.564444    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:32:49.564444    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:32:49.571937    2316 main.go:141] libmachine: Using SSH client type: native
	I0807 17:32:49.572500    2316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.128 22 <nil> <nil>}
	I0807 17:32:49.572500    2316 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0807 17:32:49.722246    2316 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0807 17:32:49.722374    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:32:51.904772    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:32:51.904772    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:32:51.905335    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:32:54.490347    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:32:54.490347    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:32:54.497936    2316 main.go:141] libmachine: Using SSH client type: native
	I0807 17:32:54.497936    2316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.128 22 <nil> <nil>}
	I0807 17:32:54.497936    2316 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0807 17:32:56.708557    2316 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
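	The `diff -u … || { mv …; systemctl daemon-reload …; }` command above is an install-if-changed idiom: the staged `docker.service.new` only replaces the live unit (triggering a reload/enable/restart) when the live file is missing or differs, which is why the first run hits the `can't stat` branch. A Python sketch of the same pattern on a temp directory (illustrative only, with hypothetical file contents):

	```python
	import filecmp, os, shutil, tempfile

	def install_if_changed(staged: str, live: str) -> bool:
	    """Swap the staged file into place only when it differs from (or there
	    is no) live copy. Returns True when a reload would be needed."""
	    if os.path.exists(live) and filecmp.cmp(staged, live, shallow=False):
	        os.remove(staged)          # identical: discard the staged copy
	        return False
	    shutil.move(staged, live)      # missing or different: install it
	    return True

	tmp = tempfile.mkdtemp()
	staged = os.path.join(tmp, "docker.service.new")
	live = os.path.join(tmp, "docker.service")
	with open(staged, "w") as f:
	    f.write("[Unit]\nDescription=Docker Application Container Engine\n")
	changed = install_if_changed(staged, live)
	print(changed)  # True: first install, like the "can't stat" case in the log
	```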
	
	I0807 17:32:56.708557    2316 machine.go:97] duration metric: took 46.5667543s to provisionDockerMachine
	I0807 17:32:56.708557    2316 client.go:171] duration metric: took 1m59.735845s to LocalClient.Create
	I0807 17:32:56.708557    2316 start.go:167] duration metric: took 1m59.735845s to libmachine.API.Create "addons-463600"
	I0807 17:32:56.708557    2316 start.go:293] postStartSetup for "addons-463600" (driver="hyperv")
	I0807 17:32:56.708557    2316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 17:32:56.722723    2316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 17:32:56.722723    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:32:58.902098    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:32:58.903172    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:32:58.903252    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:33:01.507041    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:33:01.507342    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:33:01.507652    2316 sshutil.go:53] new ssh client: &{IP:172.28.235.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\id_rsa Username:docker}
	I0807 17:33:01.619488    2316 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8957006s)
	I0807 17:33:01.630657    2316 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 17:33:01.636743    2316 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 17:33:01.636901    2316 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0807 17:33:01.637914    2316 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0807 17:33:01.638398    2316 start.go:296] duration metric: took 4.9297768s for postStartSetup
	I0807 17:33:01.642456    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:33:03.805727    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:33:03.806720    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:33:03.806790    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:33:06.374939    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:33:06.374939    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:33:06.375362    2316 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\config.json ...
	I0807 17:33:06.378579    2316 start.go:128] duration metric: took 2m9.4093103s to createHost
	I0807 17:33:06.378579    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:33:08.579400    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:33:08.580178    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:33:08.580296    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:33:11.187348    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:33:11.187413    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:33:11.193476    2316 main.go:141] libmachine: Using SSH client type: native
	I0807 17:33:11.193476    2316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.128 22 <nil> <nil>}
	I0807 17:33:11.193476    2316 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0807 17:33:11.327354    2316 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723051991.347156261
	
	I0807 17:33:11.327354    2316 fix.go:216] guest clock: 1723051991.347156261
	I0807 17:33:11.327354    2316 fix.go:229] Guest: 2024-08-07 17:33:11.347156261 +0000 UTC Remote: 2024-08-07 17:33:06.3785797 +0000 UTC m=+135.216132501 (delta=4.968576561s)
	I0807 17:33:11.327354    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:33:13.523232    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:33:13.523232    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:33:13.523333    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:33:16.237076    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:33:16.237076    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:33:16.245338    2316 main.go:141] libmachine: Using SSH client type: native
	I0807 17:33:16.245338    2316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.128 22 <nil> <nil>}
	I0807 17:33:16.245338    2316 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1723051991
	I0807 17:33:16.381418    2316 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Aug  7 17:33:11 UTC 2024
	
	I0807 17:33:16.381418    2316 fix.go:236] clock set: Wed Aug  7 17:33:11 UTC 2024
	 (err=<nil>)
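The two SSH commands above are minikube's guest-clock fix: read the guest's epoch time (`date +%s.%N`), compare it with the host's, and when the drift is large enough, force the guest clock with `sudo date -s @<epoch>` (here the delta was ~4.97s). A minimal sketch of that check, run locally so both readings come from the same machine; the 5-second threshold is an assumption, not minikube's exact value:

```shell
# Sketch of the guest-clock sync shown in fix.go above.
# In minikube, guest_epoch comes from the VM over SSH and host_epoch
# from the Windows host; here both are local, so delta stays tiny.
guest_epoch=$(date +%s.%N)
host_epoch=$(date +%s)
delta=$(awk -v g="$guest_epoch" -v h="$host_epoch" \
  'BEGIN { d = g - h; if (d < 0) d = -d; print d }')
# Only reset the clock when drift exceeds the (assumed) threshold;
# with both readings local this branch never fires.
if awk -v d="$delta" 'BEGIN { exit !(d > 5) }'; then
  sudo date -s "@$host_epoch"
fi
echo "delta=${delta}s"
```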
	I0807 17:33:16.381418    2316 start.go:83] releasing machines lock for "addons-463600", held for 2m19.4120198s
	I0807 17:33:16.381418    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:33:18.612320    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:33:18.612320    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:33:18.612520    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:33:21.277834    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:33:21.277866    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:33:21.282033    2316 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0807 17:33:21.282178    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:33:21.292887    2316 ssh_runner.go:195] Run: cat /version.json
	I0807 17:33:21.292887    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:33:23.579633    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:33:23.579713    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:33:23.579792    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:33:23.579990    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:33:23.580074    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:33:23.580170    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:33:26.362927    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:33:26.362927    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:33:26.363666    2316 sshutil.go:53] new ssh client: &{IP:172.28.235.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\id_rsa Username:docker}
	I0807 17:33:26.387843    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:33:26.388634    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:33:26.389326    2316 sshutil.go:53] new ssh client: &{IP:172.28.235.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\id_rsa Username:docker}
	I0807 17:33:26.457092    2316 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1749923s)
	W0807 17:33:26.457092    2316 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0807 17:33:26.489613    2316 ssh_runner.go:235] Completed: cat /version.json: (5.1965736s)
	I0807 17:33:26.504199    2316 ssh_runner.go:195] Run: systemctl --version
	I0807 17:33:26.527087    2316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 17:33:26.535695    2316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 17:33:26.548283    2316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 17:33:26.581092    2316 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0807 17:33:26.581262    2316 start.go:495] detecting cgroup driver to use...
	I0807 17:33:26.581663    2316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 17:33:26.629692    2316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0807 17:33:26.660198    2316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0807 17:33:26.679603    2316 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0807 17:33:26.692133    2316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0807 17:33:26.723156    2316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	W0807 17:33:26.734817    2316 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0807 17:33:26.734817    2316 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0807 17:33:26.762584    2316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0807 17:33:26.795612    2316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 17:33:26.837432    2316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 17:33:26.869870    2316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0807 17:33:26.903216    2316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0807 17:33:26.935242    2316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0807 17:33:26.971055    2316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 17:33:27.000723    2316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 17:33:27.031581    2316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:33:27.238535    2316 ssh_runner.go:195] Run: sudo systemctl restart containerd
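Taken together, the sed edits above leave /etc/containerd/config.toml with CRI-plugin settings along these lines before the daemon restart (an illustrative fragment reconstructed from the commands in the log, not the literal generated file):

```toml
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  restrict_oom_score_adj = false
  enable_unprivileged_ports = true

  [plugins."io.containerd.grpc.v1.cri".cni]
    conf_dir = "/etc/cni/net.d"

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false
```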
	I0807 17:33:27.279450    2316 start.go:495] detecting cgroup driver to use...
	I0807 17:33:27.298088    2316 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0807 17:33:27.341929    2316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 17:33:27.381392    2316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 17:33:27.427900    2316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 17:33:27.466537    2316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 17:33:27.502768    2316 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0807 17:33:27.565237    2316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 17:33:27.588005    2316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 17:33:27.632535    2316 ssh_runner.go:195] Run: which cri-dockerd
	I0807 17:33:27.650894    2316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0807 17:33:27.668740    2316 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0807 17:33:27.714758    2316 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0807 17:33:27.917981    2316 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0807 17:33:28.127486    2316 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0807 17:33:28.127600    2316 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0807 17:33:28.172121    2316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:33:28.363044    2316 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 17:33:30.951858    2316 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5887808s)
	I0807 17:33:30.963848    2316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0807 17:33:31.001880    2316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 17:33:31.036491    2316 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0807 17:33:31.238965    2316 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0807 17:33:31.449275    2316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:33:31.648256    2316 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0807 17:33:31.689046    2316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 17:33:31.720751    2316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:33:31.914501    2316 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0807 17:33:32.019827    2316 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0807 17:33:32.031840    2316 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0807 17:33:32.043180    2316 start.go:563] Will wait 60s for crictl version
	I0807 17:33:32.055818    2316 ssh_runner.go:195] Run: which crictl
	I0807 17:33:32.079427    2316 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 17:33:32.135128    2316 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0807 17:33:32.143166    2316 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 17:33:32.187280    2316 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 17:33:32.219460    2316 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0807 17:33:32.219460    2316 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0807 17:33:32.224296    2316 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0807 17:33:32.224296    2316 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0807 17:33:32.224296    2316 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0807 17:33:32.224296    2316 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:f6:3a:6a Flags:up|broadcast|multicast|running}
	I0807 17:33:32.226292    2316 ip.go:210] interface addr: fe80::e7eb:b592:d388:ff99/64
	I0807 17:33:32.227291    2316 ip.go:210] interface addr: 172.28.224.1/20
	I0807 17:33:32.238341    2316 ssh_runner.go:195] Run: grep 172.28.224.1	host.minikube.internal$ /etc/hosts
	I0807 17:33:32.245229    2316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
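The /etc/hosts command above is an idempotent update pattern: strip any existing line for the name, append the fresh mapping, then copy the rebuilt file into place. The same pattern against a scratch file instead of the real /etc/hosts (paths and the stale 10.0.0.9 entry are illustrative):

```shell
# Idempotent hosts-entry update, as in the log's grep/echo/cp pipeline.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
# Drop any old mapping for the name, then append the current one.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '172.28.224.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep -c 'host.minikube.internal' "$hosts"   # prints 1: exactly one entry remains
```

Running it twice leaves the file unchanged, which is why minikube can apply it on every start.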
	I0807 17:33:32.268484    2316 kubeadm.go:883] updating cluster {Name:addons-463600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.3 ClusterName:addons-463600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.235.128 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0807 17:33:32.268484    2316 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 17:33:32.278644    2316 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0807 17:33:32.302065    2316 docker.go:685] Got preloaded images: 
	I0807 17:33:32.302103    2316 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0807 17:33:32.312866    2316 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0807 17:33:32.340446    2316 ssh_runner.go:195] Run: which lz4
	I0807 17:33:32.358176    2316 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0807 17:33:32.364586    2316 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0807 17:33:32.364789    2316 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0807 17:33:34.546159    2316 docker.go:649] duration metric: took 2.1998176s to copy over tarball
	I0807 17:33:34.558646    2316 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0807 17:33:39.909091    2316 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.3498912s)
	I0807 17:33:39.909153    2316 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0807 17:33:39.972772    2316 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0807 17:33:39.992792    2316 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0807 17:33:40.035342    2316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:33:40.243012    2316 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 17:33:46.030520    2316 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.7874337s)
	I0807 17:33:46.041023    2316 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0807 17:33:46.072289    2316 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0807 17:33:46.072358    2316 cache_images.go:84] Images are preloaded, skipping loading
	I0807 17:33:46.072465    2316 kubeadm.go:934] updating node { 172.28.235.128 8443 v1.30.3 docker true true} ...
	I0807 17:33:46.072684    2316 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-463600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.235.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-463600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 17:33:46.082703    2316 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0807 17:33:46.155514    2316 cni.go:84] Creating CNI manager for ""
	I0807 17:33:46.155669    2316 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 17:33:46.155726    2316 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 17:33:46.155825    2316 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.235.128 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-463600 NodeName:addons-463600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.235.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.235.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 17:33:46.156325    2316 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.235.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-463600"
	  kubeletExtraArgs:
	    node-ip: 172.28.235.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.235.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0807 17:33:46.171087    2316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 17:33:46.188290    2316 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 17:33:46.203749    2316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0807 17:33:46.220683    2316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0807 17:33:46.253005    2316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 17:33:46.282108    2316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0807 17:33:46.323768    2316 ssh_runner.go:195] Run: grep 172.28.235.128	control-plane.minikube.internal$ /etc/hosts
	I0807 17:33:46.330340    2316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.235.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 17:33:46.364863    2316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:33:46.559101    2316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 17:33:46.591056    2316 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600 for IP: 172.28.235.128
	I0807 17:33:46.591116    2316 certs.go:194] generating shared ca certs ...
	I0807 17:33:46.591176    2316 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:33:46.591616    2316 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0807 17:33:46.705606    2316 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt ...
	I0807 17:33:46.705644    2316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt: {Name:mkb0ebdce3b528a3c449211fdfbba2d86c130c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:33:46.706649    2316 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key ...
	I0807 17:33:46.706649    2316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key: {Name:mk1ec59eaa4c2f7a35370569c3fc13a80bc1499d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:33:46.707801    2316 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0807 17:33:46.866287    2316 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0807 17:33:46.866287    2316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mk78efc1a7bd38719c2f7a853f9109f9a1a3252e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:33:46.867908    2316 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key ...
	I0807 17:33:46.867908    2316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk57de77abeaf23b535083770f5522a07b562b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:33:46.868925    2316 certs.go:256] generating profile certs ...
	I0807 17:33:46.869933    2316 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.key
	I0807 17:33:46.869933    2316 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt with IP's: []
	I0807 17:33:47.193520    2316 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt ...
	I0807 17:33:47.193520    2316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: {Name:mk2a379d3ed79d670aab6749676d72be2fdab51e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:33:47.194750    2316 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.key ...
	I0807 17:33:47.195755    2316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.key: {Name:mk77aef3c7d04391383e14c0741cc7801c2a749f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:33:47.195888    2316 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\apiserver.key.b94bbf00
	I0807 17:33:47.195888    2316 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\apiserver.crt.b94bbf00 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.235.128]
	I0807 17:33:47.813834    2316 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\apiserver.crt.b94bbf00 ...
	I0807 17:33:47.813834    2316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\apiserver.crt.b94bbf00: {Name:mk3ae3e44f9814e791315e71b5d9fb94cc41259f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:33:47.815885    2316 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\apiserver.key.b94bbf00 ...
	I0807 17:33:47.815885    2316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\apiserver.key.b94bbf00: {Name:mkeaa32c844b31d7e098d23f2af9f5fa7709bf4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:33:47.816882    2316 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\apiserver.crt.b94bbf00 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\apiserver.crt
	I0807 17:33:47.828947    2316 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\apiserver.key.b94bbf00 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\apiserver.key
	I0807 17:33:47.829969    2316 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\proxy-client.key
	I0807 17:33:47.829969    2316 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\proxy-client.crt with IP's: []
	I0807 17:33:48.087182    2316 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\proxy-client.crt ...
	I0807 17:33:48.087182    2316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\proxy-client.crt: {Name:mk19e6138ece3a6ed582e0d97de2c5ac56932c1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:33:48.088884    2316 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\proxy-client.key ...
	I0807 17:33:48.088884    2316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\proxy-client.key: {Name:mk5d7567098b48632b5f123311d077465b06d132 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:33:48.101290    2316 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0807 17:33:48.101290    2316 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0807 17:33:48.101290    2316 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0807 17:33:48.102306    2316 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0807 17:33:48.104381    2316 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 17:33:48.153303    2316 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 17:33:48.198199    2316 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 17:33:48.244366    2316 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0807 17:33:48.289938    2316 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0807 17:33:48.338708    2316 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0807 17:33:48.382898    2316 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 17:33:48.426349    2316 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 17:33:48.469468    2316 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 17:33:48.511535    2316 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 17:33:48.554809    2316 ssh_runner.go:195] Run: openssl version
	I0807 17:33:48.576748    2316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 17:33:48.608975    2316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 17:33:48.615819    2316 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:33 /usr/share/ca-certificates/minikubeCA.pem
	I0807 17:33:48.627006    2316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 17:33:48.646118    2316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 17:33:48.675949    2316 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 17:33:48.681913    2316 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0807 17:33:48.681913    2316 kubeadm.go:392] StartCluster: {Name:addons-463600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:addons-463600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.235.128 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 17:33:48.692657    2316 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0807 17:33:48.731970    2316 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0807 17:33:48.760899    2316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0807 17:33:48.791861    2316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0807 17:33:48.809249    2316 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0807 17:33:48.809846    2316 kubeadm.go:157] found existing configuration files:
	
	I0807 17:33:48.821371    2316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0807 17:33:48.838034    2316 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0807 17:33:48.850249    2316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0807 17:33:48.881879    2316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0807 17:33:48.898026    2316 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0807 17:33:48.910252    2316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0807 17:33:48.941560    2316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0807 17:33:48.956526    2316 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0807 17:33:48.969169    2316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0807 17:33:48.997783    2316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0807 17:33:49.012898    2316 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0807 17:33:49.025473    2316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0807 17:33:49.045015    2316 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0807 17:33:49.284202    2316 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0807 17:34:03.482610    2316 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0807 17:34:03.482784    2316 kubeadm.go:310] [preflight] Running pre-flight checks
	I0807 17:34:03.482784    2316 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0807 17:34:03.482784    2316 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0807 17:34:03.483516    2316 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0807 17:34:03.483594    2316 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0807 17:34:03.486236    2316 out.go:204]   - Generating certificates and keys ...
	I0807 17:34:03.486236    2316 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0807 17:34:03.486996    2316 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0807 17:34:03.486996    2316 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0807 17:34:03.486996    2316 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0807 17:34:03.487552    2316 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0807 17:34:03.487552    2316 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0807 17:34:03.487739    2316 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0807 17:34:03.488234    2316 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-463600 localhost] and IPs [172.28.235.128 127.0.0.1 ::1]
	I0807 17:34:03.488397    2316 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0807 17:34:03.488890    2316 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-463600 localhost] and IPs [172.28.235.128 127.0.0.1 ::1]
	I0807 17:34:03.489001    2316 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0807 17:34:03.489186    2316 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0807 17:34:03.489283    2316 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0807 17:34:03.489508    2316 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0807 17:34:03.489508    2316 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0807 17:34:03.489508    2316 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0807 17:34:03.489508    2316 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0807 17:34:03.489508    2316 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0807 17:34:03.490090    2316 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0807 17:34:03.490276    2316 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0807 17:34:03.490463    2316 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0807 17:34:03.494756    2316 out.go:204]   - Booting up control plane ...
	I0807 17:34:03.494756    2316 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0807 17:34:03.495664    2316 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0807 17:34:03.495664    2316 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0807 17:34:03.495664    2316 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0807 17:34:03.495664    2316 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0807 17:34:03.495664    2316 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0807 17:34:03.496689    2316 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0807 17:34:03.496733    2316 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0807 17:34:03.496733    2316 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001575032s
	I0807 17:34:03.496733    2316 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0807 17:34:03.496733    2316 kubeadm.go:310] [api-check] The API server is healthy after 7.502900031s
	I0807 17:34:03.497472    2316 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0807 17:34:03.497472    2316 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0807 17:34:03.497901    2316 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0807 17:34:03.498207    2316 kubeadm.go:310] [mark-control-plane] Marking the node addons-463600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0807 17:34:03.498207    2316 kubeadm.go:310] [bootstrap-token] Using token: x6dha2.2l7gjvkniyufe51j
	I0807 17:34:03.502087    2316 out.go:204]   - Configuring RBAC rules ...
	I0807 17:34:03.502087    2316 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0807 17:34:03.503327    2316 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0807 17:34:03.503595    2316 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0807 17:34:03.503595    2316 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0807 17:34:03.504346    2316 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0807 17:34:03.504582    2316 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0807 17:34:03.504814    2316 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0807 17:34:03.504814    2316 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0807 17:34:03.504814    2316 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0807 17:34:03.504814    2316 kubeadm.go:310] 
	I0807 17:34:03.504814    2316 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0807 17:34:03.504814    2316 kubeadm.go:310] 
	I0807 17:34:03.504814    2316 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0807 17:34:03.504814    2316 kubeadm.go:310] 
	I0807 17:34:03.504814    2316 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0807 17:34:03.504814    2316 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0807 17:34:03.504814    2316 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0807 17:34:03.504814    2316 kubeadm.go:310] 
	I0807 17:34:03.505840    2316 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0807 17:34:03.505840    2316 kubeadm.go:310] 
	I0807 17:34:03.506005    2316 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0807 17:34:03.506062    2316 kubeadm.go:310] 
	I0807 17:34:03.506206    2316 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0807 17:34:03.506256    2316 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0807 17:34:03.506571    2316 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0807 17:34:03.506571    2316 kubeadm.go:310] 
	I0807 17:34:03.506784    2316 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0807 17:34:03.507019    2316 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0807 17:34:03.507019    2316 kubeadm.go:310] 
	I0807 17:34:03.507019    2316 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x6dha2.2l7gjvkniyufe51j \
	I0807 17:34:03.507479    2316 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:54111b4769238fc338d0511ba57c177f732a6d68c0b9cd0aa8cacbbf3c79643b \
	I0807 17:34:03.507479    2316 kubeadm.go:310] 	--control-plane 
	I0807 17:34:03.507666    2316 kubeadm.go:310] 
	I0807 17:34:03.507805    2316 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0807 17:34:03.507805    2316 kubeadm.go:310] 
	I0807 17:34:03.508111    2316 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x6dha2.2l7gjvkniyufe51j \
	I0807 17:34:03.508272    2316 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:54111b4769238fc338d0511ba57c177f732a6d68c0b9cd0aa8cacbbf3c79643b 
	I0807 17:34:03.508272    2316 cni.go:84] Creating CNI manager for ""
	I0807 17:34:03.508272    2316 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 17:34:03.510617    2316 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0807 17:34:03.526062    2316 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0807 17:34:03.545028    2316 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0807 17:34:03.578730    2316 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0807 17:34:03.592321    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:03.592321    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-463600 minikube.k8s.io/updated_at=2024_08_07T17_34_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e minikube.k8s.io/name=addons-463600 minikube.k8s.io/primary=true
	I0807 17:34:03.598309    2316 ops.go:34] apiserver oom_adj: -16
	I0807 17:34:03.723266    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:04.225290    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:04.726187    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:05.226971    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:05.727236    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:06.229570    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:06.732160    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:07.236391    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:07.737120    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:08.245550    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:08.730078    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:09.234420    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:09.732826    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:10.233078    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:10.724233    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:11.227164    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:11.732152    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:12.233824    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:12.723972    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:13.230045    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:13.731094    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:14.237901    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:14.724590    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:15.227496    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:15.733244    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:16.225847    2316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:34:16.362393    2316 kubeadm.go:1113] duration metric: took 12.7834981s to wait for elevateKubeSystemPrivileges
	I0807 17:34:16.362484    2316 kubeadm.go:394] duration metric: took 27.6802134s to StartCluster
	I0807 17:34:16.362484    2316 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:34:16.362865    2316 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 17:34:16.363477    2316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:34:16.365287    2316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0807 17:34:16.365486    2316 start.go:235] Will wait 6m0s for node &{Name: IP:172.28.235.128 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 17:34:16.365486    2316 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0807 17:34:16.365648    2316 addons.go:69] Setting helm-tiller=true in profile "addons-463600"
	I0807 17:34:16.365790    2316 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-463600"
	I0807 17:34:16.365790    2316 addons.go:234] Setting addon helm-tiller=true in "addons-463600"
	I0807 17:34:16.365790    2316 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-463600"
	I0807 17:34:16.365918    2316 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-463600"
	I0807 17:34:16.365648    2316 addons.go:69] Setting cloud-spanner=true in profile "addons-463600"
	I0807 17:34:16.366070    2316 addons.go:234] Setting addon cloud-spanner=true in "addons-463600"
	I0807 17:34:16.366162    2316 addons.go:69] Setting ingress-dns=true in profile "addons-463600"
	I0807 17:34:16.366162    2316 config.go:182] Loaded profile config "addons-463600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 17:34:16.366162    2316 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-463600"
	I0807 17:34:16.365918    2316 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-463600"
	I0807 17:34:16.365996    2316 addons.go:69] Setting registry=true in profile "addons-463600"
	I0807 17:34:16.366650    2316 addons.go:234] Setting addon registry=true in "addons-463600"
	I0807 17:34:16.366650    2316 host.go:66] Checking if "addons-463600" exists ...
	I0807 17:34:16.365648    2316 addons.go:69] Setting yakd=true in profile "addons-463600"
	I0807 17:34:16.365648    2316 addons.go:69] Setting default-storageclass=true in profile "addons-463600"
	I0807 17:34:16.366877    2316 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-463600"
	I0807 17:34:16.367103    2316 addons.go:234] Setting addon yakd=true in "addons-463600"
	I0807 17:34:16.365918    2316 addons.go:69] Setting volcano=true in profile "addons-463600"
	I0807 17:34:16.365648    2316 addons.go:69] Setting gcp-auth=true in profile "addons-463600"
	I0807 17:34:16.365918    2316 addons.go:69] Setting ingress=true in profile "addons-463600"
	I0807 17:34:16.365918    2316 addons.go:69] Setting volumesnapshots=true in profile "addons-463600"
	I0807 17:34:16.365996    2316 addons.go:69] Setting inspektor-gadget=true in profile "addons-463600"
	I0807 17:34:16.365996    2316 addons.go:69] Setting storage-provisioner=true in profile "addons-463600"
	I0807 17:34:16.365996    2316 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-463600"
	I0807 17:34:16.365996    2316 addons.go:69] Setting metrics-server=true in profile "addons-463600"
	I0807 17:34:16.365996    2316 host.go:66] Checking if "addons-463600" exists ...
	I0807 17:34:16.366162    2316 addons.go:234] Setting addon ingress-dns=true in "addons-463600"
	I0807 17:34:16.366162    2316 host.go:66] Checking if "addons-463600" exists ...
	I0807 17:34:16.366735    2316 host.go:66] Checking if "addons-463600" exists ...
	I0807 17:34:16.367373    2316 host.go:66] Checking if "addons-463600" exists ...
	I0807 17:34:16.367373    2316 host.go:66] Checking if "addons-463600" exists ...
	I0807 17:34:16.367526    2316 addons.go:234] Setting addon storage-provisioner=true in "addons-463600"
	I0807 17:34:16.367526    2316 addons.go:234] Setting addon metrics-server=true in "addons-463600"
	I0807 17:34:16.367638    2316 host.go:66] Checking if "addons-463600" exists ...
	I0807 17:34:16.367638    2316 addons.go:234] Setting addon volcano=true in "addons-463600"
	I0807 17:34:16.367638    2316 addons.go:234] Setting addon inspektor-gadget=true in "addons-463600"
	I0807 17:34:16.367638    2316 host.go:66] Checking if "addons-463600" exists ...
	I0807 17:34:16.367638    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:16.367638    2316 addons.go:234] Setting addon ingress=true in "addons-463600"
	I0807 17:34:16.367756    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:16.367756    2316 host.go:66] Checking if "addons-463600" exists ...
	I0807 17:34:16.367860    2316 host.go:66] Checking if "addons-463600" exists ...
	I0807 17:34:16.367860    2316 addons.go:234] Setting addon volumesnapshots=true in "addons-463600"
	I0807 17:34:16.367860    2316 mustload.go:65] Loading cluster: addons-463600
	I0807 17:34:16.368056    2316 host.go:66] Checking if "addons-463600" exists ...
	I0807 17:34:16.368198    2316 host.go:66] Checking if "addons-463600" exists ...
	I0807 17:34:16.369159    2316 host.go:66] Checking if "addons-463600" exists ...
	I0807 17:34:16.369601    2316 config.go:182] Loaded profile config "addons-463600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 17:34:16.369601    2316 out.go:177] * Verifying Kubernetes components...
	I0807 17:34:16.372251    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:16.375253    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:16.375253    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:16.376264    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:16.376264    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:16.376264    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:16.376264    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:16.376264    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:16.377253    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:16.377253    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:16.377253    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:16.378261    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:16.378261    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:16.378261    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:16.416706    2316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:34:17.320504    2316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0807 17:34:17.901747    2316 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.4850211s)
	I0807 17:34:17.926154    2316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 17:34:19.808863    2316 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.4883266s)
	I0807 17:34:19.808863    2316 start.go:971] {"host.minikube.internal": 172.28.224.1} host record injected into CoreDNS's ConfigMap
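	The inline `sed` pipeline logged above is hard to read on one line. A minimal re-statement of just the transformation, assuming GNU sed (the two expressions are copied from the log; the wrapper function name is illustrative, not minikube's):

	```shell
	# Insert a hosts{} block for host.minikube.internal ahead of the
	# "forward . /etc/resolv.conf" line, and a "log" directive ahead of "errors",
	# as minikube does before piping the result back through "kubectl replace -f -".
	inject_host_record() {  # $1 = host IP (172.28.224.1 in the log above)
	  sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           '"$1"' host.minikube.internal\n           fallthrough\n        }' \
	      -e '/^        errors *$/i \        log'
	}
	```

	Fed with `kubectl -n kube-system get configmap coredns -o yaml`, this produces the ConfigMap that the command above then replaces.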
	I0807 17:34:19.815865    2316 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.8891445s)
	I0807 17:34:19.818012    2316 node_ready.go:35] waiting up to 6m0s for node "addons-463600" to be "Ready" ...
	I0807 17:34:20.181333    2316 node_ready.go:49] node "addons-463600" has status "Ready":"True"
	I0807 17:34:20.181333    2316 node_ready.go:38] duration metric: took 362.4664ms for node "addons-463600" to be "Ready" ...
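	The node_ready (and later pod_ready) checks above poll the cluster until a condition holds, with a 6m0s ceiling. A generic sketch of that wait pattern in shell (the function name and 1-second interval are illustrative, not minikube's actual implementation):

	```shell
	# Poll a command until it succeeds or a timeout (in seconds) expires,
	# mirroring the node_ready/pod_ready waits in the log above.
	wait_until() {
	  local timeout=$1; shift
	  local deadline=$(( $(date +%s) + timeout ))
	  until "$@"; do
	    [ "$(date +%s)" -ge "$deadline" ] && return 1
	    sleep 1
	  done
	}
	```

	With kubectl access, a hypothetical invocation of the same check would be `wait_until 360 sh -c 'kubectl get node addons-463600 -o jsonpath="{.status.conditions[?(@.type==\"Ready\")].status}" | grep -q True'`.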
	I0807 17:34:20.181333    2316 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 17:34:20.273695    2316 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2twmx" in "kube-system" namespace to be "Ready" ...
	I0807 17:34:20.468439    2316 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-463600" context rescaled to 1 replicas
	I0807 17:34:22.482301    2316 pod_ready.go:102] pod "coredns-7db6d8ff4d-2twmx" in "kube-system" namespace has status "Ready":"False"
	I0807 17:34:23.450957    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:23.450957    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:23.455930    2316 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-463600"
	I0807 17:34:23.455930    2316 host.go:66] Checking if "addons-463600" exists ...
	I0807 17:34:23.456932    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:23.457936    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:23.457936    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:23.468937    2316 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0807 17:34:23.473959    2316 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0807 17:34:23.485582    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:23.485582    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:23.487590    2316 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0807 17:34:23.491596    2316 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 17:34:23.502597    2316 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0807 17:34:23.502597    2316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0807 17:34:23.502597    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:23.505593    2316 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 17:34:23.505593    2316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0807 17:34:23.505593    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:23.561039    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:23.561039    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:23.578663    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:23.578663    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:23.586664    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:23.599670    2316 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0807 17:34:23.603665    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:23.604666    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:23.607662    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:23.612489    2316 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0807 17:34:23.615360    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:23.616367    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:23.623321    2316 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0807 17:34:23.633058    2316 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0807 17:34:23.633058    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:23.630099    2316 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0807 17:34:23.631988    2316 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0807 17:34:23.639984    2316 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0807 17:34:23.651608    2316 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0807 17:34:23.684213    2316 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0807 17:34:23.699318    2316 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0807 17:34:23.699318    2316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0807 17:34:23.699318    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:23.712403    2316 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0807 17:34:23.712403    2316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0807 17:34:23.712403    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:23.724547    2316 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0807 17:34:23.730556    2316 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0807 17:34:23.730556    2316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0807 17:34:23.730556    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:23.781198    2316 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0807 17:34:23.814081    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:23.815076    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:23.831078    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:23.844740    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:23.848053    2316 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0807 17:34:23.868889    2316 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0807 17:34:23.868889    2316 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0807 17:34:23.868889    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:23.875308    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:23.875308    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:23.876422    2316 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0807 17:34:23.885598    2316 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0807 17:34:23.899722    2316 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0807 17:34:23.907758    2316 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0807 17:34:23.907758    2316 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0807 17:34:23.917987    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:23.919990    2316 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0807 17:34:23.919990    2316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0807 17:34:23.919990    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:23.937919    2316 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0807 17:34:23.967530    2316 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0807 17:34:24.004738    2316 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0807 17:34:24.028245    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:24.028245    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:24.032229    2316 addons.go:234] Setting addon default-storageclass=true in "addons-463600"
	I0807 17:34:24.032229    2316 host.go:66] Checking if "addons-463600" exists ...
	I0807 17:34:24.035218    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:24.046224    2316 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0807 17:34:24.062948    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:24.062948    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:24.075464    2316 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0807 17:34:24.080420    2316 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0807 17:34:24.087425    2316 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0807 17:34:24.087425    2316 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0807 17:34:24.087425    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:24.092465    2316 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0807 17:34:24.092465    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:24.321585    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:24.321585    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:24.321585    2316 host.go:66] Checking if "addons-463600" exists ...
	I0807 17:34:24.417761    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:24.417761    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:24.425980    2316 out.go:177]   - Using image docker.io/registry:2.8.3
	I0807 17:34:24.439399    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:24.439399    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:24.441450    2316 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0807 17:34:24.444838    2316 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0807 17:34:24.457257    2316 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0807 17:34:24.457257    2316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0807 17:34:24.457257    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:24.460274    2316 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0807 17:34:24.460274    2316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0807 17:34:24.460274    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:24.926380    2316 pod_ready.go:102] pod "coredns-7db6d8ff4d-2twmx" in "kube-system" namespace has status "Ready":"False"
	I0807 17:34:27.147438    2316 pod_ready.go:102] pod "coredns-7db6d8ff4d-2twmx" in "kube-system" namespace has status "Ready":"False"
	I0807 17:34:29.286264    2316 pod_ready.go:102] pod "coredns-7db6d8ff4d-2twmx" in "kube-system" namespace has status "Ready":"False"
	I0807 17:34:30.122580    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:30.122580    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:30.122580    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:34:30.174656    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:30.174656    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:30.177669    2316 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0807 17:34:30.193942    2316 out.go:177]   - Using image docker.io/busybox:stable
	I0807 17:34:30.201325    2316 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0807 17:34:30.201325    2316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0807 17:34:30.201325    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:30.473368    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:30.473368    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:30.473368    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:34:30.485345    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:30.485345    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:30.485345    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:34:30.503443    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:30.503443    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:30.503443    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:34:30.514356    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:30.514356    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:30.514432    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:34:30.615682    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:30.616520    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:30.637775    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:34:30.926608    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:30.926608    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:30.926608    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:34:31.083055    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:31.083270    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:31.083270    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:34:31.223621    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:31.223621    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:31.223621    2316 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0807 17:34:31.223621    2316 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0807 17:34:31.223621    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:31.242536    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:31.242536    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:31.242536    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:34:31.566555    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:31.566555    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:31.566555    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:34:31.737202    2316 pod_ready.go:102] pod "coredns-7db6d8ff4d-2twmx" in "kube-system" namespace has status "Ready":"False"
	I0807 17:34:32.218885    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:32.218885    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:32.218885    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:34:32.241091    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:32.241091    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:32.241091    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:34:32.276668    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:32.276668    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:32.276668    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:34:32.446623    2316 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0807 17:34:32.446623    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:34.188161    2316 pod_ready.go:102] pod "coredns-7db6d8ff4d-2twmx" in "kube-system" namespace has status "Ready":"False"
	I0807 17:34:34.673370    2316 pod_ready.go:92] pod "coredns-7db6d8ff4d-2twmx" in "kube-system" namespace has status "Ready":"True"
	I0807 17:34:34.673370    2316 pod_ready.go:81] duration metric: took 14.3994887s for pod "coredns-7db6d8ff4d-2twmx" in "kube-system" namespace to be "Ready" ...
	I0807 17:34:34.673370    2316 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pd89b" in "kube-system" namespace to be "Ready" ...
	I0807 17:34:34.980057    2316 pod_ready.go:92] pod "coredns-7db6d8ff4d-pd89b" in "kube-system" namespace has status "Ready":"True"
	I0807 17:34:34.980057    2316 pod_ready.go:81] duration metric: took 306.6826ms for pod "coredns-7db6d8ff4d-pd89b" in "kube-system" namespace to be "Ready" ...
	I0807 17:34:34.980057    2316 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-463600" in "kube-system" namespace to be "Ready" ...
	I0807 17:34:34.999227    2316 pod_ready.go:92] pod "etcd-addons-463600" in "kube-system" namespace has status "Ready":"True"
	I0807 17:34:35.000226    2316 pod_ready.go:81] duration metric: took 20.1689ms for pod "etcd-addons-463600" in "kube-system" namespace to be "Ready" ...
	I0807 17:34:35.000226    2316 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-463600" in "kube-system" namespace to be "Ready" ...
	I0807 17:34:35.340590    2316 pod_ready.go:92] pod "kube-apiserver-addons-463600" in "kube-system" namespace has status "Ready":"True"
	I0807 17:34:35.340590    2316 pod_ready.go:81] duration metric: took 340.3595ms for pod "kube-apiserver-addons-463600" in "kube-system" namespace to be "Ready" ...
	I0807 17:34:35.340590    2316 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-463600" in "kube-system" namespace to be "Ready" ...
	I0807 17:34:35.363599    2316 pod_ready.go:92] pod "kube-controller-manager-addons-463600" in "kube-system" namespace has status "Ready":"True"
	I0807 17:34:35.364606    2316 pod_ready.go:81] duration metric: took 23.0091ms for pod "kube-controller-manager-addons-463600" in "kube-system" namespace to be "Ready" ...
	I0807 17:34:35.364606    2316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2jg44" in "kube-system" namespace to be "Ready" ...
	I0807 17:34:35.393605    2316 pod_ready.go:92] pod "kube-proxy-2jg44" in "kube-system" namespace has status "Ready":"True"
	I0807 17:34:35.393605    2316 pod_ready.go:81] duration metric: took 28.9987ms for pod "kube-proxy-2jg44" in "kube-system" namespace to be "Ready" ...
	I0807 17:34:35.393605    2316 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-463600" in "kube-system" namespace to be "Ready" ...
	I0807 17:34:35.427596    2316 pod_ready.go:92] pod "kube-scheduler-addons-463600" in "kube-system" namespace has status "Ready":"True"
	I0807 17:34:35.427596    2316 pod_ready.go:81] duration metric: took 33.9904ms for pod "kube-scheduler-addons-463600" in "kube-system" namespace to be "Ready" ...
	I0807 17:34:35.427596    2316 pod_ready.go:38] duration metric: took 15.2460659s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 17:34:35.428590    2316 api_server.go:52] waiting for apiserver process to appear ...
	I0807 17:34:35.449304    2316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 17:34:35.715263    2316 api_server.go:72] duration metric: took 19.349441s to wait for apiserver process to appear ...
	I0807 17:34:35.715263    2316 api_server.go:88] waiting for apiserver healthz status ...
	I0807 17:34:35.715263    2316 api_server.go:253] Checking apiserver healthz at https://172.28.235.128:8443/healthz ...
	I0807 17:34:35.888483    2316 api_server.go:279] https://172.28.235.128:8443/healthz returned 200:
	ok
	I0807 17:34:35.892602    2316 api_server.go:141] control plane version: v1.30.3
	I0807 17:34:35.892602    2316 api_server.go:131] duration metric: took 177.3366ms to wait for apiserver health ...
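	The healthz probe above is a plain HTTPS GET that must return 200 with a body of `ok`. A sketch with curl (assumed available; `-k` is needed because the apiserver serves a self-signed certificate; the function name is illustrative):

	```shell
	# Return success only when the endpoint answers with the literal body "ok",
	# as https://172.28.235.128:8443/healthz does in the log above.
	check_healthz() {
	  local body
	  body=$(curl -fsSk --max-time 5 "$1") && [ "$body" = "ok" ]
	}
	```

	Usage against this cluster would be `check_healthz https://172.28.235.128:8443/healthz`.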
	I0807 17:34:35.892602    2316 system_pods.go:43] waiting for kube-system pods to appear ...
	I0807 17:34:35.909663    2316 system_pods.go:59] 7 kube-system pods found
	I0807 17:34:35.909663    2316 system_pods.go:61] "coredns-7db6d8ff4d-2twmx" [c4f535c9-24de-46ad-9574-72ab0e4d2fe5] Running
	I0807 17:34:35.909663    2316 system_pods.go:61] "coredns-7db6d8ff4d-pd89b" [b59ad0cf-0411-4b95-bd44-0c4353855f1d] Running
	I0807 17:34:35.909663    2316 system_pods.go:61] "etcd-addons-463600" [62fc31bb-8c40-406e-858a-efc7c72d72f9] Running
	I0807 17:34:35.909663    2316 system_pods.go:61] "kube-apiserver-addons-463600" [42957e5c-f511-4ba1-8250-07fe38feb30c] Running
	I0807 17:34:35.909663    2316 system_pods.go:61] "kube-controller-manager-addons-463600" [e16fcbe7-542c-42c6-ad9c-1a3d4f650645] Running
	I0807 17:34:35.909663    2316 system_pods.go:61] "kube-proxy-2jg44" [281c9afd-9a83-47ee-94cb-1a4f39ed0d2d] Running
	I0807 17:34:35.909663    2316 system_pods.go:61] "kube-scheduler-addons-463600" [9b3f5586-88a9-482c-a6cb-88a2b169462d] Running
	I0807 17:34:35.909663    2316 system_pods.go:74] duration metric: took 17.0609ms to wait for pod list to return data ...
	I0807 17:34:35.909663    2316 default_sa.go:34] waiting for default service account to be created ...
	I0807 17:34:35.918666    2316 default_sa.go:45] found service account: "default"
	I0807 17:34:35.918666    2316 default_sa.go:55] duration metric: took 9.0024ms for default service account to be created ...
	I0807 17:34:35.918666    2316 system_pods.go:116] waiting for k8s-apps to be running ...
	I0807 17:34:36.019350    2316 system_pods.go:86] 7 kube-system pods found
	I0807 17:34:36.019350    2316 system_pods.go:89] "coredns-7db6d8ff4d-2twmx" [c4f535c9-24de-46ad-9574-72ab0e4d2fe5] Running
	I0807 17:34:36.019350    2316 system_pods.go:89] "coredns-7db6d8ff4d-pd89b" [b59ad0cf-0411-4b95-bd44-0c4353855f1d] Running
	I0807 17:34:36.019350    2316 system_pods.go:89] "etcd-addons-463600" [62fc31bb-8c40-406e-858a-efc7c72d72f9] Running
	I0807 17:34:36.019350    2316 system_pods.go:89] "kube-apiserver-addons-463600" [42957e5c-f511-4ba1-8250-07fe38feb30c] Running
	I0807 17:34:36.019350    2316 system_pods.go:89] "kube-controller-manager-addons-463600" [e16fcbe7-542c-42c6-ad9c-1a3d4f650645] Running
	I0807 17:34:36.019350    2316 system_pods.go:89] "kube-proxy-2jg44" [281c9afd-9a83-47ee-94cb-1a4f39ed0d2d] Running
	I0807 17:34:36.019350    2316 system_pods.go:89] "kube-scheduler-addons-463600" [9b3f5586-88a9-482c-a6cb-88a2b169462d] Running
	I0807 17:34:36.019350    2316 system_pods.go:126] duration metric: took 100.6832ms to wait for k8s-apps to be running ...
	I0807 17:34:36.019350    2316 system_svc.go:44] waiting for kubelet service to be running ....
	I0807 17:34:36.041603    2316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 17:34:36.188909    2316 system_svc.go:56] duration metric: took 169.5566ms WaitForService to wait for kubelet
	I0807 17:34:36.189894    2316 kubeadm.go:582] duration metric: took 19.8230807s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 17:34:36.189894    2316 node_conditions.go:102] verifying NodePressure condition ...
	I0807 17:34:36.226213    2316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 17:34:36.227214    2316 node_conditions.go:123] node cpu capacity is 2
	I0807 17:34:36.227214    2316 node_conditions.go:105] duration metric: took 37.3192ms to run NodePressure ...
	I0807 17:34:36.227214    2316 start.go:241] waiting for startup goroutines ...
	I0807 17:34:36.899357    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:36.899357    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:36.899357    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:34:37.971199    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:34:37.971199    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:37.972201    2316 sshutil.go:53] new ssh client: &{IP:172.28.235.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\id_rsa Username:docker}
	I0807 17:34:38.082474    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:38.082474    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:38.082474    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:34:38.241819    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:34:38.241819    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:38.241819    2316 sshutil.go:53] new ssh client: &{IP:172.28.235.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\id_rsa Username:docker}
	I0807 17:34:38.301717    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:34:38.301717    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:38.302637    2316 sshutil.go:53] new ssh client: &{IP:172.28.235.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\id_rsa Username:docker}
	I0807 17:34:38.362509    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:34:38.362509    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:38.363632    2316 sshutil.go:53] new ssh client: &{IP:172.28.235.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\id_rsa Username:docker}
	I0807 17:34:38.443834    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:34:38.443920    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:38.445049    2316 sshutil.go:53] new ssh client: &{IP:172.28.235.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\id_rsa Username:docker}
	I0807 17:34:38.456695    2316 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0807 17:34:38.456695    2316 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0807 17:34:38.533486    2316 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0807 17:34:38.534493    2316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0807 17:34:38.550156    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:34:38.550156    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:38.551999    2316 sshutil.go:53] new ssh client: &{IP:172.28.235.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\id_rsa Username:docker}
	I0807 17:34:38.585469    2316 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0807 17:34:38.585595    2316 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0807 17:34:38.610994    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:38.611938    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:38.612009    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:34:38.633200    2316 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0807 17:34:38.633200    2316 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0807 17:34:38.659784    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:34:38.659850    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:38.661114    2316 sshutil.go:53] new ssh client: &{IP:172.28.235.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\id_rsa Username:docker}
	I0807 17:34:38.774702    2316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0807 17:34:38.942115    2316 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0807 17:34:38.942115    2316 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0807 17:34:39.044265    2316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0807 17:34:39.081650    2316 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0807 17:34:39.081650    2316 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0807 17:34:39.100437    2316 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0807 17:34:39.100437    2316 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0807 17:34:39.369451    2316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 17:34:39.403440    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:34:39.403440    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:39.404430    2316 sshutil.go:53] new ssh client: &{IP:172.28.235.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\id_rsa Username:docker}
	I0807 17:34:39.409431    2316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0807 17:34:39.468461    2316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0807 17:34:39.480380    2316 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0807 17:34:39.480380    2316 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0807 17:34:39.483574    2316 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0807 17:34:39.483737    2316 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0807 17:34:39.493401    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:34:39.493401    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:39.494413    2316 sshutil.go:53] new ssh client: &{IP:172.28.235.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\id_rsa Username:docker}
	I0807 17:34:39.604624    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:34:39.605508    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:39.606308    2316 sshutil.go:53] new ssh client: &{IP:172.28.235.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\id_rsa Username:docker}
	I0807 17:34:39.667718    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:34:39.667789    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:39.668623    2316 sshutil.go:53] new ssh client: &{IP:172.28.235.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\id_rsa Username:docker}
	I0807 17:34:39.752947    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:34:39.752947    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:39.753936    2316 sshutil.go:53] new ssh client: &{IP:172.28.235.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\id_rsa Username:docker}
	I0807 17:34:39.771213    2316 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0807 17:34:39.771300    2316 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0807 17:34:39.779517    2316 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0807 17:34:39.779635    2316 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0807 17:34:39.830507    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:34:39.830933    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:39.833557    2316 sshutil.go:53] new ssh client: &{IP:172.28.235.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\id_rsa Username:docker}
	I0807 17:34:39.945914    2316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0807 17:34:40.001391    2316 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0807 17:34:40.001391    2316 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0807 17:34:40.027228    2316 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0807 17:34:40.027366    2316 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0807 17:34:40.228008    2316 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0807 17:34:40.229016    2316 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0807 17:34:40.341277    2316 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0807 17:34:40.341277    2316 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0807 17:34:40.357846    2316 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0807 17:34:40.357901    2316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0807 17:34:40.453157    2316 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0807 17:34:40.453157    2316 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0807 17:34:40.460139    2316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0807 17:34:40.516171    2316 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0807 17:34:40.516171    2316 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0807 17:34:40.551568    2316 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0807 17:34:40.551568    2316 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0807 17:34:40.590551    2316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0807 17:34:40.691896    2316 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0807 17:34:40.692015    2316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0807 17:34:40.778714    2316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0807 17:34:40.779330    2316 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0807 17:34:40.780347    2316 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0807 17:34:40.835729    2316 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0807 17:34:40.835808    2316 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0807 17:34:40.877788    2316 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0807 17:34:40.877951    2316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0807 17:34:40.984628    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:34:40.984628    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:40.985438    2316 sshutil.go:53] new ssh client: &{IP:172.28.235.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\id_rsa Username:docker}
	I0807 17:34:41.073426    2316 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0807 17:34:41.073426    2316 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0807 17:34:41.115976    2316 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0807 17:34:41.116095    2316 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0807 17:34:41.163755    2316 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0807 17:34:41.163755    2316 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0807 17:34:41.248194    2316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0807 17:34:41.288443    2316 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0807 17:34:41.288443    2316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0807 17:34:41.381135    2316 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0807 17:34:41.381135    2316 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0807 17:34:41.412146    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:34:41.412146    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:41.413351    2316 sshutil.go:53] new ssh client: &{IP:172.28.235.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\id_rsa Username:docker}
	I0807 17:34:41.449971    2316 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0807 17:34:41.449971    2316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0807 17:34:41.636720    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:34:41.637023    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:41.637694    2316 sshutil.go:53] new ssh client: &{IP:172.28.235.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\id_rsa Username:docker}
	I0807 17:34:41.692407    2316 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0807 17:34:41.692488    2316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0807 17:34:41.765837    2316 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0807 17:34:41.765837    2316 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0807 17:34:41.800166    2316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0807 17:34:41.865728    2316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0807 17:34:42.089530    2316 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0807 17:34:42.089530    2316 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0807 17:34:42.183527    2316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0807 17:34:42.210933    2316 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0807 17:34:42.210966    2316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0807 17:34:42.469080    2316 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0807 17:34:42.612526    2316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0807 17:34:42.648543    2316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0807 17:34:42.803528    2316 addons.go:234] Setting addon gcp-auth=true in "addons-463600"
	I0807 17:34:42.803528    2316 host.go:66] Checking if "addons-463600" exists ...
	I0807 17:34:42.805569    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:43.221441    2316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.4466805s)
	I0807 17:34:45.114290    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:45.114965    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:45.129762    2316 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0807 17:34:45.129762    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-463600 ).state
	I0807 17:34:47.555545    2316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:34:47.556207    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:47.556207    2316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-463600 ).networkadapters[0]).ipaddresses[0]
	I0807 17:34:50.545924    2316 main.go:141] libmachine: [stdout =====>] : 172.28.235.128
	
	I0807 17:34:50.545924    2316 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:34:50.547212    2316 sshutil.go:53] new ssh client: &{IP:172.28.235.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-463600\id_rsa Username:docker}
	I0807 17:34:54.788005    2316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (15.7434837s)
	I0807 17:34:54.788142    2316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (15.4183879s)
	I0807 17:34:54.788320    2316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (15.3786373s)
	I0807 17:34:54.788349    2316 addons.go:475] Verifying addon metrics-server=true in "addons-463600"
	I0807 17:34:54.788349    2316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (15.3196895s)
	I0807 17:34:54.788349    2316 addons.go:475] Verifying addon ingress=true in "addons-463600"
	I0807 17:34:54.788349    2316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (14.8422426s)
	I0807 17:34:54.788349    2316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (14.3280243s)
	I0807 17:34:54.788349    2316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (14.1976148s)
	I0807 17:34:54.788349    2316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (14.0094542s)
	I0807 17:34:54.788911    2316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (13.5405413s)
	I0807 17:34:54.788967    2316 addons.go:475] Verifying addon registry=true in "addons-463600"
	I0807 17:34:54.788967    2316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (12.9886333s)
	W0807 17:34:54.788911    2316 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0807 17:34:54.793398    2316 retry.go:31] will retry after 305.965789ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0807 17:34:54.789151    2316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (12.9232122s)
	I0807 17:34:54.789151    2316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (12.6054612s)
	I0807 17:34:54.793398    2316 out.go:177] * Verifying ingress addon...
	I0807 17:34:54.797378    2316 out.go:177] * Verifying registry addon...
	I0807 17:34:54.799376    2316 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-463600 service yakd-dashboard -n yakd-dashboard
	
	I0807 17:34:54.806796    2316 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0807 17:34:54.810900    2316 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0807 17:34:54.888113    2316 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0807 17:34:54.888113    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:34:54.890114    2316 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0807 17:34:54.890198    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0807 17:34:54.921566    2316 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0807 17:34:55.132144    2316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0807 17:34:55.463595    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:34:55.485725    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:34:55.853148    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:34:55.868768    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:34:56.404054    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:34:56.414424    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:34:56.475809    2316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (13.8270874s)
	I0807 17:34:56.475898    2316 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (11.3459899s)
	I0807 17:34:56.478986    2316 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0807 17:34:56.480745    2316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (13.8674547s)
	I0807 17:34:56.480862    2316 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-463600"
	I0807 17:34:56.488438    2316 out.go:177] * Verifying csi-hostpath-driver addon...
	I0807 17:34:56.491440    2316 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0807 17:34:56.493439    2316 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0807 17:34:56.493439    2316 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0807 17:34:56.493439    2316 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0807 17:34:56.602438    2316 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0807 17:34:56.602438    2316 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0807 17:34:56.614113    2316 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0807 17:34:56.614172    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:34:56.692213    2316 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0807 17:34:56.692213    2316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0807 17:34:56.856296    2316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0807 17:34:56.873700    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:34:56.874821    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:34:57.009498    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:34:57.321898    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:34:57.328750    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:34:57.512609    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:34:57.828659    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:34:57.831226    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:34:58.015983    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:34:58.146876    2316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.0146096s)
	I0807 17:34:58.332274    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:34:58.334221    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:34:58.506900    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:34:58.826234    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:34:58.832695    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:34:59.036927    2316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.1806032s)
	I0807 17:34:59.044946    2316 addons.go:475] Verifying addon gcp-auth=true in "addons-463600"
	I0807 17:34:59.048389    2316 out.go:177] * Verifying gcp-auth addon...
	I0807 17:34:59.053167    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:34:59.058310    2316 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0807 17:34:59.073196    2316 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0807 17:34:59.329106    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:34:59.333118    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:34:59.504136    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:34:59.820874    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:34:59.822847    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:00.023057    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:00.325475    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:00.326367    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:00.502535    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:00.819599    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:00.822580    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:01.013394    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:01.325363    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:01.327381    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:01.514801    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:01.830357    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:01.832326    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:02.004875    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:02.322925    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:02.322925    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:02.512588    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:02.829558    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:02.829757    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:03.003922    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:03.323532    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:03.323889    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:03.514539    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:03.830985    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:03.831770    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:04.001241    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:04.316549    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:04.321491    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:04.510010    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:04.822706    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:04.822706    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:05.014145    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:05.316706    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:05.324343    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:05.505582    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:05.824003    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:05.824508    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:06.010922    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:06.324282    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:06.325727    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:06.518952    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:06.832249    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:06.832524    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:07.010101    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:07.321752    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:07.321752    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:07.510764    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:07.824502    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:07.826495    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:08.017739    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:08.326074    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:08.330371    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:08.507479    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:08.824217    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:08.825648    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:09.013676    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:09.315692    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:09.321322    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:09.515419    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:09.824366    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:09.825099    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:10.017180    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:10.331570    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:10.333745    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:10.505536    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:10.820966    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:10.822940    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:11.013815    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:11.330583    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:11.333360    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:11.505235    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:11.818759    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:11.822150    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:12.012387    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:12.330183    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:12.331797    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:12.504540    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:12.820942    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:12.820942    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:13.011456    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:13.325469    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:13.327443    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:13.519071    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:13.829149    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:13.831144    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:14.311141    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:14.320509    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:14.402916    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:14.747006    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:14.919772    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:14.925704    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:15.142301    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:15.319718    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:15.323740    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:15.507964    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:15.864701    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:15.890696    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:16.025030    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:16.350886    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:16.350999    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:16.519121    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:16.824777    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:16.828147    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:17.009419    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:17.332343    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:17.333007    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:17.503271    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:17.816860    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:17.834735    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:18.009105    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:18.323513    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:18.323677    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:18.512121    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:18.827794    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:18.828495    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:19.053961    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:19.325755    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:19.326508    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:19.517492    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:19.828321    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:19.830913    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:20.013701    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:20.333651    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:20.337013    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:20.503679    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:20.830675    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:20.831789    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:21.014889    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:21.330691    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:21.330691    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:21.505281    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:21.819321    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:21.821315    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:22.011260    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:22.325407    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:22.325530    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:22.516208    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:22.829504    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:22.830037    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:23.003138    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:23.316027    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:23.320528    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:23.505460    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:23.822547    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:23.822547    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:24.014086    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:24.331345    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:24.331948    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:24.507571    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:24.820949    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:24.823377    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:25.011925    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:25.320808    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:25.322461    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:25.510384    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:25.831061    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:25.831061    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:26.002050    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:26.323092    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:26.323239    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:26.544824    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:26.834044    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:35:26.835004    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:27.188746    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:35:27.331388    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:35:27.334182    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... ≈260 near-identical lines elided: the same three label selectors ("kubernetes.io/minikube-addons=registry", "app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=csi-hostpath-driver") were polled every ~500 ms from 17:35:27 through 17:36:11, and every pod remained in state Pending: [<nil>] throughout ...]
	I0807 17:36:11.323417    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:36:11.324095    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:11.511187    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:11.823226    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:36:11.829930    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:12.013248    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:12.342164    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:12.344178    2316 kapi.go:107] duration metric: took 1m17.5363821s to wait for kubernetes.io/minikube-addons=registry ...
	I0807 17:36:12.507783    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:12.835943    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:13.013833    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:13.329991    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:13.506600    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:13.825567    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:14.015633    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:14.332634    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:14.508320    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:14.823635    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:15.014297    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:15.330957    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:15.508570    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:15.822972    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:16.013487    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:16.329339    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:16.509596    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:16.820173    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:17.013476    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:17.329587    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:17.506650    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:17.824053    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:18.002586    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:18.334329    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:18.508817    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:18.958901    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:19.013804    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:19.328805    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:19.506568    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:20.297748    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:20.299501    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:20.333135    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:20.506979    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:20.830307    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:21.401820    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:21.401820    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:21.516111    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:21.835526    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:22.007573    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:22.324643    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:22.524522    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:22.829513    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:23.006284    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:23.323847    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:23.515412    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:23.833890    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:24.013778    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:24.325779    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:24.516547    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:24.824606    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:25.019478    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:25.325713    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:25.504811    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:25.837046    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:26.009009    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:26.324829    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:26.515407    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:26.829618    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:27.005813    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:27.319806    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:27.511830    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:27.827831    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:28.004249    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:28.330543    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:28.502770    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:28.833487    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:29.008728    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:29.323040    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:29.512603    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:30.137412    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:30.137760    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:30.323395    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:30.513846    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:30.820583    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:31.014898    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:31.321506    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:31.514270    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:31.831757    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:32.012860    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:32.318583    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:32.511050    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:32.826869    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:33.004753    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:33.321005    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:33.512573    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:33.831321    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:34.003320    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:34.321273    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:34.513135    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:34.830392    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:35.007460    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:35.339602    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:35.512392    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:35.824865    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:36.015672    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:36.328568    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:36.505447    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:36.819644    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:37.012611    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:37.329158    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:37.506311    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:37.820752    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:38.397966    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:38.398541    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:38.508439    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:38.835317    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:39.016956    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:39.325821    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:39.519034    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:39.831425    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:40.006692    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:40.321064    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:40.510152    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:40.826316    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:41.018178    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:41.333945    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:41.526540    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:41.821347    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:42.192009    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:42.325486    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:42.518664    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:42.827838    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:43.022166    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:43.328777    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:43.507287    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:43.821636    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:44.013503    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:44.328239    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:44.508799    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:44.820130    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:45.012709    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:45.328638    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:45.503924    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:45.821267    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:46.009240    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:46.327030    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:46.502833    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:46.833865    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:47.013365    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:47.327499    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:47.530428    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:48.623862    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:48.625661    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:48.629999    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:48.651655    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:48.844547    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:49.014541    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:49.326262    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:49.522805    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:49.820536    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:50.104018    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:50.327216    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:50.519955    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:50.821827    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:51.013484    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:51.329932    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:51.506358    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:51.822977    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:52.015316    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:52.328881    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:52.510873    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:52.835818    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:53.008830    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:53.324799    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:53.513038    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:53.827370    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:54.011783    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:54.321182    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:54.514405    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:54.829948    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:55.004273    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:55.322457    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:55.510348    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:55.823780    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:56.015479    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:56.708581    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:56.709456    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:56.825422    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:57.012735    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:57.327558    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:57.517649    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:57.827190    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:58.004889    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:58.331882    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:58.505552    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:58.823553    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:59.014552    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:59.329347    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:36:59.504545    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:36:59.840190    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:00.010910    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:00.321561    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:00.517407    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:00.827321    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:01.018704    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:01.328444    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:01.519337    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:01.818436    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:02.007823    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:02.322055    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:02.516188    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:02.829871    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:03.005999    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:03.325769    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:03.519891    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:03.834802    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:04.011805    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:04.326084    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:04.515205    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:04.826154    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:05.016586    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:05.341770    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:05.510050    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:05.830288    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:06.007237    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:06.332829    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:06.512136    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:06.832107    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:07.025583    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:07.319689    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:07.523663    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:07.824491    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:08.015792    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:08.329881    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:08.506596    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:08.820650    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:09.012022    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:09.329521    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:09.507161    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:09.833019    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:10.014900    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:10.323565    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:10.521728    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:10.831610    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:11.012636    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:11.328500    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:11.515748    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:11.831077    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:12.003970    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:12.321257    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:12.514519    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:12.830728    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:13.010553    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:13.323962    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:13.514182    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:13.831500    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:14.158047    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:14.333038    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:14.509560    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:14.826223    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:15.043582    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:15.328002    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:15.504773    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:15.832998    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:16.010854    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:16.334291    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:16.532099    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:16.835110    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:17.009681    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:17.324731    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:17.517830    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:17.830773    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:18.005475    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:18.322517    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:18.517254    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:18.831645    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:19.008476    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:19.325645    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:19.519478    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:20.044093    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:20.044932    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:20.323358    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:20.512045    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:20.833466    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:21.013525    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:21.322857    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:21.515918    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:21.828945    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:22.004287    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:22.333465    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:22.509288    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:22.827215    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:23.004335    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:23.322722    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:23.517247    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:23.832413    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:24.008817    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:24.326102    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:24.517838    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:24.833020    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:25.009805    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:25.324083    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:25.516285    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:25.828066    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:26.004538    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:26.322188    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:26.514533    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:26.832264    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:27.003336    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:27.333288    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:27.512768    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:27.823714    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:28.043642    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:28.329673    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:28.511135    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:29.358779    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:29.362651    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:29.400770    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:29.702582    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:29.829591    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:30.027477    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:30.322802    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:30.514117    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:30.827900    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:31.022084    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:31.333188    2316 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:37:31.512697    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:31.822400    2316 kapi.go:107] duration metric: took 2m37.009475s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0807 17:37:32.016085    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:32.509583    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:33.018273    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:33.522148    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:34.005493    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:34.512193    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:35.022322    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:35.510801    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:36.027173    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:36.512191    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:37.003017    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:37.511412    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:38.011609    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:38.510327    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:39.004770    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:39.511379    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:40.010934    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:40.543823    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:41.028706    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:41.516704    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:42.016298    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:42.504391    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:43.034577    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:43.088104    2316 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0807 17:37:43.088171    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:43.503195    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:43.578946    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:44.257265    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:44.258214    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:44.510127    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:44.579197    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:45.016084    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:37:45.071873    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:45.519788    2316 kapi.go:107] duration metric: took 2m49.0241069s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0807 17:37:45.576969    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:46.080505    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:46.568592    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:47.073377    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:47.572149    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:48.069336    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:48.571339    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:49.068718    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:49.580820    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:50.080375    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:50.580278    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:51.066784    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:51.578444    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:52.067769    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:52.580850    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:53.076094    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:53.577725    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:54.076618    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:54.575483    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:55.071216    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:55.567596    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:56.069419    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:56.581672    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:57.073333    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:57.570441    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:58.069525    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:58.574113    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:59.072411    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:37:59.570543    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:00.071741    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:00.571515    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:01.068804    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:01.566119    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:02.068645    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:02.579170    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:03.075041    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:03.569323    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:04.077573    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:04.581596    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:05.079319    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:05.573787    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:06.076598    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:06.580709    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:07.070307    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:07.569655    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:08.071069    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:08.579716    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:09.067919    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:09.576258    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:10.079380    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:10.569714    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:11.071578    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:11.576822    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:12.076126    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:12.575326    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:13.075029    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:13.577762    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:14.078803    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:14.577569    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:15.078767    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:15.566049    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:16.083436    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:16.578146    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:17.089888    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:17.571239    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:18.079083    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:18.567778    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:19.150250    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:19.588117    2316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:20.069203    2316 kapi.go:107] duration metric: took 3m21.0083s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0807 17:38:20.073036    2316 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-463600 cluster.
	I0807 17:38:20.075851    2316 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0807 17:38:20.078084    2316 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0807 17:38:20.081053    2316 out.go:177] * Enabled addons: helm-tiller, volcano, storage-provisioner, metrics-server, ingress-dns, cloud-spanner, nvidia-device-plugin, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0807 17:38:20.085956    2316 addons.go:510] duration metric: took 4m3.7173245s for enable addons: enabled=[helm-tiller volcano storage-provisioner metrics-server ingress-dns cloud-spanner nvidia-device-plugin yakd default-storageclass inspektor-gadget volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0807 17:38:20.085956    2316 start.go:246] waiting for cluster config update ...
	I0807 17:38:20.085956    2316 start.go:255] writing updated cluster config ...
	I0807 17:38:20.099028    2316 ssh_runner.go:195] Run: rm -f paused
	I0807 17:38:20.368976    2316 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0807 17:38:20.375244    2316 out.go:177] * Done! kubectl is now configured to use "addons-463600" cluster and "default" namespace by default
	
	
	==> Docker <==
	Aug 07 17:41:18 addons-463600 dockerd[1438]: time="2024-08-07T17:41:18.051469503Z" level=info msg="shim disconnected" id=94913441df5d69a9eec0df8c005a26d5a6bd1cad84e1653091326840b2908bab namespace=moby
	Aug 07 17:41:18 addons-463600 dockerd[1438]: time="2024-08-07T17:41:18.053356625Z" level=warning msg="cleaning up after shim disconnected" id=94913441df5d69a9eec0df8c005a26d5a6bd1cad84e1653091326840b2908bab namespace=moby
	Aug 07 17:41:18 addons-463600 dockerd[1438]: time="2024-08-07T17:41:18.053415025Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:41:18 addons-463600 dockerd[1438]: time="2024-08-07T17:41:18.059374295Z" level=info msg="shim disconnected" id=3e945c7d1c5e067baf60d14e498ca9cd8aa1ab75db827147d84087d106e40199 namespace=moby
	Aug 07 17:41:18 addons-463600 dockerd[1438]: time="2024-08-07T17:41:18.059436596Z" level=warning msg="cleaning up after shim disconnected" id=3e945c7d1c5e067baf60d14e498ca9cd8aa1ab75db827147d84087d106e40199 namespace=moby
	Aug 07 17:41:18 addons-463600 dockerd[1438]: time="2024-08-07T17:41:18.059450496Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:41:18 addons-463600 dockerd[1432]: time="2024-08-07T17:41:18.059805800Z" level=info msg="ignoring event" container=3e945c7d1c5e067baf60d14e498ca9cd8aa1ab75db827147d84087d106e40199 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:41:18 addons-463600 dockerd[1438]: time="2024-08-07T17:41:18.199159929Z" level=warning msg="cleanup warnings time=\"2024-08-07T17:41:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 07 17:41:18 addons-463600 dockerd[1438]: time="2024-08-07T17:41:18.199485333Z" level=warning msg="cleanup warnings time=\"2024-08-07T17:41:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 07 17:41:18 addons-463600 dockerd[1432]: time="2024-08-07T17:41:18.514991921Z" level=info msg="ignoring event" container=035dbc9e1cad3088dadf459e6be07d44963ceb1aea1c7dd9c07adeec8bd62e89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:41:18 addons-463600 dockerd[1438]: time="2024-08-07T17:41:18.517072045Z" level=info msg="shim disconnected" id=035dbc9e1cad3088dadf459e6be07d44963ceb1aea1c7dd9c07adeec8bd62e89 namespace=moby
	Aug 07 17:41:18 addons-463600 dockerd[1438]: time="2024-08-07T17:41:18.517158946Z" level=warning msg="cleaning up after shim disconnected" id=035dbc9e1cad3088dadf459e6be07d44963ceb1aea1c7dd9c07adeec8bd62e89 namespace=moby
	Aug 07 17:41:18 addons-463600 dockerd[1438]: time="2024-08-07T17:41:18.517170646Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:41:18 addons-463600 dockerd[1438]: time="2024-08-07T17:41:18.565132507Z" level=info msg="shim disconnected" id=6490b5c3883e8007ba4f5cb70282af8df3589fab2c02c2f5f2e485a770320976 namespace=moby
	Aug 07 17:41:18 addons-463600 dockerd[1438]: time="2024-08-07T17:41:18.565375610Z" level=warning msg="cleaning up after shim disconnected" id=6490b5c3883e8007ba4f5cb70282af8df3589fab2c02c2f5f2e485a770320976 namespace=moby
	Aug 07 17:41:18 addons-463600 dockerd[1438]: time="2024-08-07T17:41:18.565398110Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:41:18 addons-463600 dockerd[1432]: time="2024-08-07T17:41:18.567963740Z" level=info msg="ignoring event" container=6490b5c3883e8007ba4f5cb70282af8df3589fab2c02c2f5f2e485a770320976 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:41:27 addons-463600 dockerd[1432]: time="2024-08-07T17:41:27.042638395Z" level=info msg="ignoring event" container=c8bc190a7c1e8816f7a531d45eafaaf21775efd07fde5c14553bc0c955cd5a07 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:41:27 addons-463600 dockerd[1438]: time="2024-08-07T17:41:27.043391504Z" level=info msg="shim disconnected" id=c8bc190a7c1e8816f7a531d45eafaaf21775efd07fde5c14553bc0c955cd5a07 namespace=moby
	Aug 07 17:41:27 addons-463600 dockerd[1438]: time="2024-08-07T17:41:27.043893510Z" level=warning msg="cleaning up after shim disconnected" id=c8bc190a7c1e8816f7a531d45eafaaf21775efd07fde5c14553bc0c955cd5a07 namespace=moby
	Aug 07 17:41:27 addons-463600 dockerd[1438]: time="2024-08-07T17:41:27.043977911Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:41:27 addons-463600 dockerd[1432]: time="2024-08-07T17:41:27.218523069Z" level=info msg="ignoring event" container=5d69124702fc50eaee1577c5b661bb7847deb531dd7eab7c6a124bb771ebe73c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:41:27 addons-463600 dockerd[1438]: time="2024-08-07T17:41:27.219289678Z" level=info msg="shim disconnected" id=5d69124702fc50eaee1577c5b661bb7847deb531dd7eab7c6a124bb771ebe73c namespace=moby
	Aug 07 17:41:27 addons-463600 dockerd[1438]: time="2024-08-07T17:41:27.219531781Z" level=warning msg="cleaning up after shim disconnected" id=5d69124702fc50eaee1577c5b661bb7847deb531dd7eab7c6a124bb771ebe73c namespace=moby
	Aug 07 17:41:27 addons-463600 dockerd[1438]: time="2024-08-07T17:41:27.219643683Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                         ATTEMPT             POD ID              POD
	42f18feb36c20       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  25 seconds ago      Running             hello-world-app              0                   f8dc6f38a12e4       hello-world-app-6778b5fc9f-nwbtf
	00e85732b9f69       nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                                                47 seconds ago      Running             nginx                        0                   0a7e2c4e45d06       nginx
	835fc7699dbcf       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                      0                   f746058a47282       busybox
	ea7b07a97a03f       registry.k8s.io/ingress-nginx/controller@sha256:e6439a12b52076965928e83b7b56aae6731231677b01e81818bce7fa5c60161a             4 minutes ago       Running             controller                   0                   1c72f85ab2c37       ingress-nginx-controller-6d9bd977d4-cpqjp
	90eafbae528f0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   4 minutes ago       Exited              patch                        0                   ef7776463dd5c       ingress-nginx-admission-patch-vj42w
	01061e177f39b       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      4 minutes ago       Running             volume-snapshot-controller   0                   19d6821fdf968       snapshot-controller-745499f584-rthlj
	06fb97a0f1b68       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      4 minutes ago       Running             volume-snapshot-controller   0                   cf9a00c4405cd       snapshot-controller-745499f584-qjjhm
	e7ff5327536a9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   4 minutes ago       Exited              create                       0                   9375bed7061c8       ingress-nginx-admission-create-sw9g4
	6eceebb6655aa       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       5 minutes ago       Running             local-path-provisioner       0                   dc69ad1ff88ea       local-path-provisioner-8d985888d-wbr6w
	2e7bca74116da       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        5 minutes ago       Running             yakd                         0                   8895d7d22607e       yakd-dashboard-799879c74f-dfr7k
	8a5dfb9f4c6ef       gcr.io/cloud-spanner-emulator/emulator@sha256:ea3a9e70a98bf648952401e964c5403d93e980837acf924288df19e0077ae7fb               5 minutes ago       Running             cloud-spanner-emulator       0                   cd3c951d96775       cloud-spanner-emulator-5455fb9b69-jb57p
	9b54084a89445       nvcr.io/nvidia/k8s-device-plugin@sha256:89612c7851300ddeed218b9df0dcb33bbb8495282aa17c554038e52387ce7f1e                     5 minutes ago       Running             nvidia-device-plugin-ctr     0                   1a9793450c11c       nvidia-device-plugin-daemonset-48k52
	84e60bb6a86e0       6e38f40d628db                                                                                                                6 minutes ago       Running             storage-provisioner          0                   ba8d5e4a2282a       storage-provisioner
	f2787113039c7       cbb01a7bd410d                                                                                                                7 minutes ago       Running             coredns                      0                   254357d6abe63       coredns-7db6d8ff4d-2twmx
	7bb711762332d       55bb025d2cfa5                                                                                                                7 minutes ago       Running             kube-proxy                   0                   b9a3dd92eaaea       kube-proxy-2jg44
	4ea1e951b7e30       76932a3b37d7e                                                                                                                7 minutes ago       Running             kube-controller-manager      0                   4590fbf9bc8ff       kube-controller-manager-addons-463600
	547f5618de506       3edc18e7b7672                                                                                                                7 minutes ago       Running             kube-scheduler               0                   33ef36e021157       kube-scheduler-addons-463600
	5803394a0e3f4       1f6d574d502f3                                                                                                                7 minutes ago       Running             kube-apiserver               0                   982af61cc7bbb       kube-apiserver-addons-463600
	9b97310a6ae18       3861cfcd7c04c                                                                                                                7 minutes ago       Running             etcd                         0                   2180d5caa3483       etcd-addons-463600
	
	
	==> controller_ingress [ea7b07a97a03] <==
	I0807 17:40:40.373233       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0807 17:40:40.388465       7 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0807 17:40:40.391469       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"b8caa441-e5b2-4d66-943b-11b736c36cf6", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2164", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0807 17:40:42.973367       7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	I0807 17:40:42.973601       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0807 17:40:43.281035       7 controller.go:213] "Backend successfully reloaded"
	I0807 17:40:43.289261       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-6d9bd977d4-cpqjp", UID:"59b03c0e-bfec-47cc-b1ed-b67367c7e49e", APIVersion:"v1", ResourceVersion:"759", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0807 17:40:46.306332       7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	W0807 17:41:06.652024       7 controller.go:1110] Error obtaining Endpoints for Service "kube-system/hello-world-app": no object matching key "kube-system/hello-world-app" in local store
	I0807 17:41:06.706298       7 admission.go:149] processed ingress via admission controller {testedIngressLength:2 testedIngressTime:0.055s renderingIngressLength:2 renderingIngressTime:0.002s admissionTime:0.057s testedConfigurationSize:26.2kB}
	I0807 17:41:06.706334       7 main.go:107] "successfully validated configuration, accepting" ingress="kube-system/example-ingress"
	I0807 17:41:06.730340       7 store.go:440] "Found valid IngressClass" ingress="kube-system/example-ingress" ingressclass="nginx"
	I0807 17:41:06.731665       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kube-system", Name:"example-ingress", UID:"3e089b09-2848-4f91-97cb-a7e5fdb62037", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2300", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0807 17:41:06.737743       7 controller.go:1110] Error obtaining Endpoints for Service "kube-system/hello-world-app": no object matching key "kube-system/hello-world-app" in local store
	I0807 17:41:06.738111       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0807 17:41:06.825424       7 controller.go:213] "Backend successfully reloaded"
	I0807 17:41:06.826249       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-6d9bd977d4-cpqjp", UID:"59b03c0e-bfec-47cc-b1ed-b67367c7e49e", APIVersion:"v1", ResourceVersion:"759", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0807 17:41:10.072438       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0807 17:41:10.157927       7 controller.go:213] "Backend successfully reloaded"
	I0807 17:41:10.159889       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-6d9bd977d4-cpqjp", UID:"59b03c0e-bfec-47cc-b1ed-b67367c7e49e", APIVersion:"v1", ResourceVersion:"759", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0807 17:41:32.228781       7 status.go:304] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"172.28.235.128"}]
	I0807 17:41:32.229111       7 status.go:304] "updating Ingress status" namespace="kube-system" ingress="example-ingress" currentValue=null newValue=[{"ip":"172.28.235.128"}]
	I0807 17:41:32.240487       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"b8caa441-e5b2-4d66-943b-11b736c36cf6", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2468", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0807 17:41:32.247996       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kube-system", Name:"example-ingress", UID:"3e089b09-2848-4f91-97cb-a7e5fdb62037", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2469", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	10.244.0.1 - - [07/Aug/2024:17:41:06 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/8.5.0" 80 0.004 [default-nginx-80] [] 10.244.0.34:80 615 0.004 200 993e90ebf35cb22d5419cdbf750a6651
	
	
	==> coredns [f2787113039c] <==
	[INFO] 10.244.0.22:58603 - 36589 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.0000933s
	[INFO] 10.244.0.22:58603 - 25092 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000179802s
	[INFO] 10.244.0.22:58603 - 50988 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000092601s
	[INFO] 10.244.0.22:58603 - 52264 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000309704s
	[INFO] 10.244.0.22:50874 - 27921 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000286503s
	[INFO] 10.244.0.22:50874 - 13662 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000179602s
	[INFO] 10.244.0.22:50874 - 22835 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000168801s
	[INFO] 10.244.0.22:50874 - 48796 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000159502s
	[INFO] 10.244.0.22:50874 - 48539 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000102501s
	[INFO] 10.244.0.22:50874 - 20969 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000176702s
	[INFO] 10.244.0.22:50874 - 58310 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000079901s
	[INFO] 10.244.0.22:35321 - 53894 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000774508s
	[INFO] 10.244.0.22:54858 - 25479 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000065101s
	[INFO] 10.244.0.22:35321 - 13326 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.0000778s
	[INFO] 10.244.0.22:54858 - 20413 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000073201s
	[INFO] 10.244.0.22:54858 - 20175 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000100901s
	[INFO] 10.244.0.22:54858 - 27336 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000050301s
	[INFO] 10.244.0.22:35321 - 59651 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000144602s
	[INFO] 10.244.0.22:54858 - 49475 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0000521s
	[INFO] 10.244.0.22:54858 - 16636 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000117101s
	[INFO] 10.244.0.22:54858 - 1921 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.0000514s
	[INFO] 10.244.0.22:35321 - 28900 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000058801s
	[INFO] 10.244.0.22:35321 - 11034 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000089301s
	[INFO] 10.244.0.22:35321 - 45927 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061s
	[INFO] 10.244.0.22:35321 - 48558 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.0000612s
	
	
	==> describe nodes <==
	Name:               addons-463600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-463600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=addons-463600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_07T17_34_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-463600
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 17:33:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-463600
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 17:41:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 17:41:12 +0000   Wed, 07 Aug 2024 17:33:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 17:41:12 +0000   Wed, 07 Aug 2024 17:33:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 17:41:12 +0000   Wed, 07 Aug 2024 17:33:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 17:41:12 +0000   Wed, 07 Aug 2024 17:34:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.235.128
	  Hostname:    addons-463600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 07788f198d0f4884929b4c0f54c00b23
	  System UUID:                1afa53ca-c404-3548-9917-aba97fbc0a60
	  Boot ID:                    5ce1e696-fdc4-4611-8d78-bae4de347f64
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  default                     cloud-spanner-emulator-5455fb9b69-jb57p      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m51s
	  default                     hello-world-app-6778b5fc9f-nwbtf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  ingress-nginx               ingress-nginx-controller-6d9bd977d4-cpqjp    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         6m43s
	  kube-system                 coredns-7db6d8ff4d-2twmx                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m17s
	  kube-system                 etcd-addons-463600                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m33s
	  kube-system                 kube-apiserver-addons-463600                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 kube-controller-manager-addons-463600        200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 kube-proxy-2jg44                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 kube-scheduler-addons-463600                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 nvidia-device-plugin-daemonset-48k52         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m52s
	  kube-system                 snapshot-controller-745499f584-qjjhm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 snapshot-controller-745499f584-rthlj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m50s
	  local-path-storage          local-path-provisioner-8d985888d-wbr6w       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	  yakd-dashboard              yakd-dashboard-799879c74f-dfr7k              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     6m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m8s                   kube-proxy       
	  Normal  Starting                 7m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m40s (x8 over 7m40s)  kubelet          Node addons-463600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m40s (x8 over 7m40s)  kubelet          Node addons-463600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m40s (x7 over 7m40s)  kubelet          Node addons-463600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m32s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m31s                  kubelet          Node addons-463600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m31s                  kubelet          Node addons-463600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m31s                  kubelet          Node addons-463600 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m27s                  kubelet          Node addons-463600 status is now: NodeReady
	  Normal  RegisteredNode           7m19s                  node-controller  Node addons-463600 event: Registered Node addons-463600 in Controller
	
	
	==> dmesg <==
	[ +27.593939] kauditd_printk_skb: 29 callbacks suppressed
	[ +10.204449] kauditd_printk_skb: 10 callbacks suppressed
	[  +8.500292] kauditd_printk_skb: 26 callbacks suppressed
	[Aug 7 17:37] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.310546] kauditd_printk_skb: 15 callbacks suppressed
	[ +20.157925] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.001012] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.581913] kauditd_printk_skb: 6 callbacks suppressed
	[Aug 7 17:38] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.947751] kauditd_printk_skb: 24 callbacks suppressed
	[ +23.865000] kauditd_printk_skb: 9 callbacks suppressed
	[ +14.830999] kauditd_printk_skb: 2 callbacks suppressed
	[Aug 7 17:39] kauditd_printk_skb: 29 callbacks suppressed
	[ +11.197908] kauditd_printk_skb: 22 callbacks suppressed
	[ +19.879740] kauditd_printk_skb: 7 callbacks suppressed
	[Aug 7 17:40] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.789201] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.102113] kauditd_printk_skb: 25 callbacks suppressed
	[  +8.791225] kauditd_printk_skb: 56 callbacks suppressed
	[  +6.958352] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.077945] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.454255] kauditd_printk_skb: 11 callbacks suppressed
	[  +9.745917] kauditd_printk_skb: 27 callbacks suppressed
	[Aug 7 17:41] kauditd_printk_skb: 36 callbacks suppressed
	[ +15.463792] kauditd_printk_skb: 25 callbacks suppressed
	
	
	==> etcd [9b97310a6ae1] <==
	{"level":"warn","ts":"2024-08-07T17:37:44.269464Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-07T17:37:43.82803Z","time spent":"441.424949ms","remote":"127.0.0.1:59632","response type":"/etcdserverpb.KV/Range","request count":0,"request size":70,"response count":1,"response size":808,"request content":"key:\"/registry/events/gcp-auth/gcp-auth-5db96cd9b4-qxgt4.17e982e784c86922\" "}
	{"level":"warn","ts":"2024-08-07T17:37:44.269933Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"245.422328ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:86478"}
	{"level":"info","ts":"2024-08-07T17:37:44.270018Z","caller":"traceutil/trace.go:171","msg":"trace[1375611272] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1446; }","duration":"245.480029ms","start":"2024-08-07T17:37:44.024474Z","end":"2024-08-07T17:37:44.269954Z","steps":["trace[1375611272] 'range keys from in-memory index tree'  (duration: 244.996124ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-07T17:37:44.270212Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.288467ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4367"}
	{"level":"info","ts":"2024-08-07T17:37:44.270234Z","caller":"traceutil/trace.go:171","msg":"trace[1866035687] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1446; }","duration":"181.345469ms","start":"2024-08-07T17:37:44.088882Z","end":"2024-08-07T17:37:44.270228Z","steps":["trace[1866035687] 'range keys from in-memory index tree'  (duration: 181.159466ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T17:38:47.364957Z","caller":"traceutil/trace.go:171","msg":"trace[1170845499] transaction","detail":"{read_only:false; response_revision:1644; number_of_response:1; }","duration":"173.355617ms","start":"2024-08-07T17:38:47.191576Z","end":"2024-08-07T17:38:47.364932Z","steps":["trace[1170845499] 'process raft request'  (duration: 172.851711ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-07T17:38:47.672407Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.056545ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5238127467953139668 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:1634 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:426 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-07T17:38:47.672484Z","caller":"traceutil/trace.go:171","msg":"trace[1413965122] linearizableReadLoop","detail":"{readStateIndex:1729; appliedIndex:1728; }","duration":"250.814273ms","start":"2024-08-07T17:38:47.421658Z","end":"2024-08-07T17:38:47.672473Z","steps":["trace[1413965122] 'read index received'  (duration: 38.425325ms)","trace[1413965122] 'applied index is now lower than readState.Index'  (duration: 212.388148ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-07T17:38:47.672608Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"250.945374ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/my-volcano/\" range_end:\"/registry/pods/my-volcano0\" ","response":"range_response_count:1 size:3735"}
	{"level":"info","ts":"2024-08-07T17:38:47.67263Z","caller":"traceutil/trace.go:171","msg":"trace[1063403274] range","detail":"{range_begin:/registry/pods/my-volcano/; range_end:/registry/pods/my-volcano0; response_count:1; response_revision:1645; }","duration":"250.988674ms","start":"2024-08-07T17:38:47.421635Z","end":"2024-08-07T17:38:47.672623Z","steps":["trace[1063403274] 'agreement among raft nodes before linearized reading'  (duration: 250.869373ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T17:38:47.672975Z","caller":"traceutil/trace.go:171","msg":"trace[471209481] transaction","detail":"{read_only:false; response_revision:1645; number_of_response:1; }","duration":"299.051806ms","start":"2024-08-07T17:38:47.373893Z","end":"2024-08-07T17:38:47.672945Z","steps":["trace[471209481] 'process raft request'  (duration: 86.162852ms)","trace[471209481] 'compare'  (duration: 211.911343ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-07T17:38:53.662722Z","caller":"traceutil/trace.go:171","msg":"trace[2115857267] linearizableReadLoop","detail":"{readStateIndex:1739; appliedIndex:1738; }","duration":"235.935ms","start":"2024-08-07T17:38:53.426769Z","end":"2024-08-07T17:38:53.662704Z","steps":["trace[2115857267] 'read index received'  (duration: 235.725998ms)","trace[2115857267] 'applied index is now lower than readState.Index'  (duration: 208.502µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-07T17:38:53.66339Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.679609ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/my-volcano/\" range_end:\"/registry/pods/my-volcano0\" ","response":"range_response_count:1 size:3735"}
	{"level":"info","ts":"2024-08-07T17:38:53.663441Z","caller":"traceutil/trace.go:171","msg":"trace[1250079786] range","detail":"{range_begin:/registry/pods/my-volcano/; range_end:/registry/pods/my-volcano0; response_count:1; response_revision:1654; }","duration":"236.759709ms","start":"2024-08-07T17:38:53.426673Z","end":"2024-08-07T17:38:53.663432Z","steps":["trace[1250079786] 'agreement among raft nodes before linearized reading'  (duration: 236.218703ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T17:38:53.664112Z","caller":"traceutil/trace.go:171","msg":"trace[1236294006] transaction","detail":"{read_only:false; response_revision:1654; number_of_response:1; }","duration":"243.025279ms","start":"2024-08-07T17:38:53.421071Z","end":"2024-08-07T17:38:53.664096Z","steps":["trace[1236294006] 'process raft request'  (duration: 241.463662ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T17:40:19.673471Z","caller":"traceutil/trace.go:171","msg":"trace[1495491014] linearizableReadLoop","detail":"{readStateIndex:2141; appliedIndex:2140; }","duration":"294.292915ms","start":"2024-08-07T17:40:19.37915Z","end":"2024-08-07T17:40:19.673443Z","steps":["trace[1495491014] 'read index received'  (duration: 232.368133ms)","trace[1495491014] 'applied index is now lower than readState.Index'  (duration: 61.923882ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-07T17:40:19.674321Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"295.236827ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:11108"}
	{"level":"info","ts":"2024-08-07T17:40:19.674409Z","caller":"traceutil/trace.go:171","msg":"trace[1089745672] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:2031; }","duration":"295.360628ms","start":"2024-08-07T17:40:19.379038Z","end":"2024-08-07T17:40:19.674398Z","steps":["trace[1089745672] 'agreement among raft nodes before linearized reading'  (duration: 295.140825ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T17:40:19.674938Z","caller":"traceutil/trace.go:171","msg":"trace[98620668] transaction","detail":"{read_only:false; response_revision:2031; number_of_response:1; }","duration":"320.966452ms","start":"2024-08-07T17:40:19.35396Z","end":"2024-08-07T17:40:19.674927Z","steps":["trace[98620668] 'process raft request'  (duration: 257.606452ms)","trace[98620668] 'compare'  (duration: 61.569377ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-07T17:40:19.675293Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-07T17:40:19.353805Z","time spent":"321.176654ms","remote":"127.0.0.1:59746","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1653,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/task-pv-pod\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/default/task-pv-pod\" value_size:1611 >> failure:<>"}
	{"level":"warn","ts":"2024-08-07T17:40:19.676182Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.482551ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"warn","ts":"2024-08-07T17:40:19.674361Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"293.601406ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-07T17:40:19.676538Z","caller":"traceutil/trace.go:171","msg":"trace[1476663764] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:2031; }","duration":"295.777834ms","start":"2024-08-07T17:40:19.380673Z","end":"2024-08-07T17:40:19.676451Z","steps":["trace[1476663764] 'agreement among raft nodes before linearized reading'  (duration: 293.579006ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T17:40:19.676236Z","caller":"traceutil/trace.go:171","msg":"trace[620639800] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; response_count:0; response_revision:2031; }","duration":"265.567552ms","start":"2024-08-07T17:40:19.41066Z","end":"2024-08-07T17:40:19.676228Z","steps":["trace[620639800] 'agreement among raft nodes before linearized reading'  (duration: 265.471251ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T17:40:19.79205Z","caller":"traceutil/trace.go:171","msg":"trace[928546361] transaction","detail":"{read_only:false; response_revision:2032; number_of_response:1; }","duration":"106.624546ms","start":"2024-08-07T17:40:19.685404Z","end":"2024-08-07T17:40:19.792029Z","steps":["trace[928546361] 'process raft request'  (duration: 95.138801ms)","trace[928546361] 'compare'  (duration: 11.057039ms)"],"step_count":2}
	
	
	==> kernel <==
	 17:41:35 up 9 min,  0 users,  load average: 1.06, 2.03, 1.30
	Linux addons-463600 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5803394a0e3f] <==
	I0807 17:39:18.167436       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0807 17:39:18.199687       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0807 17:39:18.218508       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	E0807 17:39:18.269567       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"volcano-scheduler\" not found]"
	W0807 17:39:18.512010       1 cacher.go:168] Terminating all watchers from cacher commands.bus.volcano.sh
	I0807 17:39:18.689619       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0807 17:39:18.870230       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0807 17:39:19.166790       1 cacher.go:168] Terminating all watchers from cacher jobs.batch.volcano.sh
	I0807 17:39:19.172905       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0807 17:39:19.211323       1 cacher.go:168] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0807 17:39:19.336204       1 cacher.go:168] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0807 17:39:19.375552       1 cacher.go:168] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0807 17:39:20.173888       1 cacher.go:168] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0807 17:39:20.368154       1 cacher.go:168] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	E0807 17:39:36.967254       1 conn.go:339] Error on socket receive: read tcp 172.28.235.128:8443->172.28.224.1:49635: use of closed network connection
	E0807 17:39:37.449933       1 conn.go:339] Error on socket receive: read tcp 172.28.235.128:8443->172.28.224.1:49638: use of closed network connection
	E0807 17:39:37.749655       1 conn.go:339] Error on socket receive: read tcp 172.28.235.128:8443->172.28.224.1:49640: use of closed network connection
	E0807 17:40:20.376322       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 172.28.235.128:8443->10.244.0.30:35708: read: connection reset by peer
	I0807 17:40:27.961131       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0807 17:40:40.374996       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0807 17:40:40.648244       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0807 17:40:40.861813       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.71.11"}
	I0807 17:41:06.304701       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0807 17:41:06.935329       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.202.192"}
	W0807 17:41:07.383359       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [4ea1e951b7e3] <==
	I0807 17:41:10.524032       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-698f998955" duration="4.8µs"
	I0807 17:41:10.871572       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="72.298027ms"
	I0807 17:41:10.871734       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="47.5µs"
	W0807 17:41:13.084018       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0807 17:41:13.084094       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0807 17:41:14.824638       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0807 17:41:14.824762       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0807 17:41:15.985060       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0807 17:41:15.985097       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0807 17:41:16.424701       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0807 17:41:16.424867       1 shared_informer.go:320] Caches are synced for resource quota
	I0807 17:41:16.561812       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0807 17:41:16.562519       1 shared_informer.go:320] Caches are synced for garbage collector
	I0807 17:41:16.817097       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0807 17:41:17.172409       1 stateful_set.go:460] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-attacher"
	I0807 17:41:17.404709       1 stateful_set.go:460] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-resizer"
	W0807 17:41:25.308732       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0807 17:41:25.309989       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0807 17:41:26.799010       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0807 17:41:26.799052       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0807 17:41:30.048409       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0807 17:41:30.048451       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0807 17:41:34.527992       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-5455fb9b69" duration="7.4µs"
	W0807 17:41:35.116377       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0807 17:41:35.116465       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [7bb711762332] <==
	I0807 17:34:25.753330       1 server_linux.go:69] "Using iptables proxy"
	I0807 17:34:25.879321       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.235.128"]
	I0807 17:34:26.586415       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0807 17:34:26.586506       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0807 17:34:26.586539       1 server_linux.go:165] "Using iptables Proxier"
	I0807 17:34:26.694356       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0807 17:34:26.695041       1 server.go:872] "Version info" version="v1.30.3"
	I0807 17:34:26.695067       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 17:34:26.698292       1 config.go:192] "Starting service config controller"
	I0807 17:34:26.698318       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0807 17:34:26.698378       1 config.go:101] "Starting endpoint slice config controller"
	I0807 17:34:26.698389       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0807 17:34:26.715372       1 config.go:319] "Starting node config controller"
	I0807 17:34:26.790575       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0807 17:34:26.790892       1 shared_informer.go:320] Caches are synced for node config
	I0807 17:34:26.806876       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0807 17:34:26.807100       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [547f5618de50] <==
	W0807 17:34:00.100702       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0807 17:34:00.101002       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0807 17:34:00.124651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0807 17:34:00.124771       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0807 17:34:00.250305       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0807 17:34:00.250868       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0807 17:34:00.280333       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0807 17:34:00.280981       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0807 17:34:00.320927       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0807 17:34:00.321399       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0807 17:34:00.328080       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0807 17:34:00.328123       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0807 17:34:00.359385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0807 17:34:00.359582       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0807 17:34:00.382089       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0807 17:34:00.382115       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0807 17:34:00.555547       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0807 17:34:00.556333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0807 17:34:00.637610       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0807 17:34:00.637684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0807 17:34:00.701067       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0807 17:34:00.701099       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0807 17:34:00.733089       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0807 17:34:00.733153       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0807 17:34:03.539229       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 07 17:41:19 addons-463600 kubelet[2283]: I0807 17:41:19.819580    2283 scope.go:117] "RemoveContainer" containerID="94913441df5d69a9eec0df8c005a26d5a6bd1cad84e1653091326840b2908bab"
	Aug 07 17:41:19 addons-463600 kubelet[2283]: I0807 17:41:19.821473    2283 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"94913441df5d69a9eec0df8c005a26d5a6bd1cad84e1653091326840b2908bab"} err="failed to get container status \"94913441df5d69a9eec0df8c005a26d5a6bd1cad84e1653091326840b2908bab\": rpc error: code = Unknown desc = Error response from daemon: No such container: 94913441df5d69a9eec0df8c005a26d5a6bd1cad84e1653091326840b2908bab"
	Aug 07 17:41:19 addons-463600 kubelet[2283]: I0807 17:41:19.821507    2283 scope.go:117] "RemoveContainer" containerID="7ee142acb992ee5139f9520e71833b340238488198914e5b4a7863fd2f9077c9"
	Aug 07 17:41:19 addons-463600 kubelet[2283]: I0807 17:41:19.822986    2283 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"7ee142acb992ee5139f9520e71833b340238488198914e5b4a7863fd2f9077c9"} err="failed to get container status \"7ee142acb992ee5139f9520e71833b340238488198914e5b4a7863fd2f9077c9\": rpc error: code = Unknown desc = Error response from daemon: No such container: 7ee142acb992ee5139f9520e71833b340238488198914e5b4a7863fd2f9077c9"
	Aug 07 17:41:19 addons-463600 kubelet[2283]: I0807 17:41:19.823086    2283 scope.go:117] "RemoveContainer" containerID="adbb531b94e6faca273c11917ab93d7b73bed2c798d68cdd859b00a2fcfe5a0d"
	Aug 07 17:41:19 addons-463600 kubelet[2283]: I0807 17:41:19.825114    2283 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"adbb531b94e6faca273c11917ab93d7b73bed2c798d68cdd859b00a2fcfe5a0d"} err="failed to get container status \"adbb531b94e6faca273c11917ab93d7b73bed2c798d68cdd859b00a2fcfe5a0d\": rpc error: code = Unknown desc = Error response from daemon: No such container: adbb531b94e6faca273c11917ab93d7b73bed2c798d68cdd859b00a2fcfe5a0d"
	Aug 07 17:41:19 addons-463600 kubelet[2283]: I0807 17:41:19.825146    2283 scope.go:117] "RemoveContainer" containerID="cebd7b64d47976402a41380f5b3e8facff5e34c2c50655e7b1efcda94e8ce6ca"
	Aug 07 17:41:19 addons-463600 kubelet[2283]: I0807 17:41:19.827736    2283 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"cebd7b64d47976402a41380f5b3e8facff5e34c2c50655e7b1efcda94e8ce6ca"} err="failed to get container status \"cebd7b64d47976402a41380f5b3e8facff5e34c2c50655e7b1efcda94e8ce6ca\": rpc error: code = Unknown desc = Error response from daemon: No such container: cebd7b64d47976402a41380f5b3e8facff5e34c2c50655e7b1efcda94e8ce6ca"
	Aug 07 17:41:19 addons-463600 kubelet[2283]: I0807 17:41:19.827812    2283 scope.go:117] "RemoveContainer" containerID="3e945c7d1c5e067baf60d14e498ca9cd8aa1ab75db827147d84087d106e40199"
	Aug 07 17:41:19 addons-463600 kubelet[2283]: I0807 17:41:19.879272    2283 scope.go:117] "RemoveContainer" containerID="3e945c7d1c5e067baf60d14e498ca9cd8aa1ab75db827147d84087d106e40199"
	Aug 07 17:41:19 addons-463600 kubelet[2283]: E0807 17:41:19.881019    2283 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 3e945c7d1c5e067baf60d14e498ca9cd8aa1ab75db827147d84087d106e40199" containerID="3e945c7d1c5e067baf60d14e498ca9cd8aa1ab75db827147d84087d106e40199"
	Aug 07 17:41:19 addons-463600 kubelet[2283]: I0807 17:41:19.881284    2283 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"3e945c7d1c5e067baf60d14e498ca9cd8aa1ab75db827147d84087d106e40199"} err="failed to get container status \"3e945c7d1c5e067baf60d14e498ca9cd8aa1ab75db827147d84087d106e40199\": rpc error: code = Unknown desc = Error response from daemon: No such container: 3e945c7d1c5e067baf60d14e498ca9cd8aa1ab75db827147d84087d106e40199"
	Aug 07 17:41:21 addons-463600 kubelet[2283]: I0807 17:41:21.033104    2283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9296c8c0-2a3f-44b0-813e-43a5ec3f085e" path="/var/lib/kubelet/pods/9296c8c0-2a3f-44b0-813e-43a5ec3f085e/volumes"
	Aug 07 17:41:21 addons-463600 kubelet[2283]: I0807 17:41:21.033979    2283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dee83501-b532-4171-a9fc-d1c47c30d2b4" path="/var/lib/kubelet/pods/dee83501-b532-4171-a9fc-d1c47c30d2b4/volumes"
	Aug 07 17:41:27 addons-463600 kubelet[2283]: I0807 17:41:27.482247    2283 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzs9t\" (UniqueName: \"kubernetes.io/projected/d089c2b3-720f-41d3-aaff-4a6acd1186dc-kube-api-access-tzs9t\") pod \"d089c2b3-720f-41d3-aaff-4a6acd1186dc\" (UID: \"d089c2b3-720f-41d3-aaff-4a6acd1186dc\") "
	Aug 07 17:41:27 addons-463600 kubelet[2283]: I0807 17:41:27.493287    2283 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d089c2b3-720f-41d3-aaff-4a6acd1186dc-kube-api-access-tzs9t" (OuterVolumeSpecName: "kube-api-access-tzs9t") pod "d089c2b3-720f-41d3-aaff-4a6acd1186dc" (UID: "d089c2b3-720f-41d3-aaff-4a6acd1186dc"). InnerVolumeSpecName "kube-api-access-tzs9t". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 07 17:41:27 addons-463600 kubelet[2283]: I0807 17:41:27.582902    2283 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tzs9t\" (UniqueName: \"kubernetes.io/projected/d089c2b3-720f-41d3-aaff-4a6acd1186dc-kube-api-access-tzs9t\") on node \"addons-463600\" DevicePath \"\""
	Aug 07 17:41:27 addons-463600 kubelet[2283]: I0807 17:41:27.685601    2283 scope.go:117] "RemoveContainer" containerID="c8bc190a7c1e8816f7a531d45eafaaf21775efd07fde5c14553bc0c955cd5a07"
	Aug 07 17:41:27 addons-463600 kubelet[2283]: I0807 17:41:27.754207    2283 scope.go:117] "RemoveContainer" containerID="c8bc190a7c1e8816f7a531d45eafaaf21775efd07fde5c14553bc0c955cd5a07"
	Aug 07 17:41:27 addons-463600 kubelet[2283]: E0807 17:41:27.756950    2283 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: c8bc190a7c1e8816f7a531d45eafaaf21775efd07fde5c14553bc0c955cd5a07" containerID="c8bc190a7c1e8816f7a531d45eafaaf21775efd07fde5c14553bc0c955cd5a07"
	Aug 07 17:41:27 addons-463600 kubelet[2283]: I0807 17:41:27.757315    2283 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"c8bc190a7c1e8816f7a531d45eafaaf21775efd07fde5c14553bc0c955cd5a07"} err="failed to get container status \"c8bc190a7c1e8816f7a531d45eafaaf21775efd07fde5c14553bc0c955cd5a07\": rpc error: code = Unknown desc = Error response from daemon: No such container: c8bc190a7c1e8816f7a531d45eafaaf21775efd07fde5c14553bc0c955cd5a07"
	Aug 07 17:41:29 addons-463600 kubelet[2283]: I0807 17:41:29.034614    2283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d089c2b3-720f-41d3-aaff-4a6acd1186dc" path="/var/lib/kubelet/pods/d089c2b3-720f-41d3-aaff-4a6acd1186dc/volumes"
	Aug 07 17:41:35 addons-463600 kubelet[2283]: I0807 17:41:35.270961    2283 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxm89\" (UniqueName: \"kubernetes.io/projected/9f7fba2a-0949-4150-85f8-334acf987a7f-kube-api-access-kxm89\") pod \"9f7fba2a-0949-4150-85f8-334acf987a7f\" (UID: \"9f7fba2a-0949-4150-85f8-334acf987a7f\") "
	Aug 07 17:41:35 addons-463600 kubelet[2283]: I0807 17:41:35.287477    2283 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f7fba2a-0949-4150-85f8-334acf987a7f-kube-api-access-kxm89" (OuterVolumeSpecName: "kube-api-access-kxm89") pod "9f7fba2a-0949-4150-85f8-334acf987a7f" (UID: "9f7fba2a-0949-4150-85f8-334acf987a7f"). InnerVolumeSpecName "kube-api-access-kxm89". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 07 17:41:35 addons-463600 kubelet[2283]: I0807 17:41:35.375402    2283 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kxm89\" (UniqueName: \"kubernetes.io/projected/9f7fba2a-0949-4150-85f8-334acf987a7f-kube-api-access-kxm89\") on node \"addons-463600\" DevicePath \"\""
	
	
	==> storage-provisioner [84e60bb6a86e] <==
	I0807 17:34:47.710230       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0807 17:34:47.734682       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0807 17:34:47.734864       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0807 17:34:47.768021       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0807 17:34:47.768239       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-463600_42625da1-0509-4636-ab92-3503a5c410a9!
	I0807 17:34:47.769562       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4d694a39-8738-4167-ac23-c125c9256c0b", APIVersion:"v1", ResourceVersion:"607", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-463600_42625da1-0509-4636-ab92-3503a5c410a9 became leader
	I0807 17:34:47.870666       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-463600_42625da1-0509-4636-ab92-3503a5c410a9!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0807 17:41:24.920377    7276 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-463600 -n addons-463600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-463600 -n addons-463600: (14.1794875s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-463600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (79.56s)

                                                
                                    
TestErrorSpam/setup (200.29s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-974300 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-974300 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 --driver=hyperv: (3m20.286515s)
error_spam_test.go:96: unexpected stderr: "W0807 17:43:50.823477   13980 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube VM"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-974300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
- KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
- MINIKUBE_LOCATION=19389
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-974300" primary control-plane node in "nospam-974300" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-974300" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0807 17:43:50.823477   13980 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (200.29s)
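The long `37a8eec1…` directory name in the recurring "Unable to resolve the current Docker CLI context" warning is not random: the Docker CLI keys each context's metadata directory by the SHA-256 digest of the context name, so every run that probes the missing "default" context resolves to the same path. A minimal sketch reproducing the path from the warning (the profile directory is copied verbatim from the log line; this is an illustration, not part of the test suite):

```python
import hashlib
import ntpath  # Windows-style path joining, regardless of host OS

# Docker CLI stores named contexts under <config dir>/contexts/meta/<sha256(name)>/meta.json.
# The directory name in the warning is the SHA-256 digest of the context name "default".
digest = hashlib.sha256(b"default").hexdigest()
print(digest)
# 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f

# Reconstruct the path the warning complains about (profile directory taken from the log).
print(ntpath.join(r"C:\Users\jenkins.minikube6\.docker\contexts\meta", digest, "meta.json"))
```

Because the digest is deterministic, the identical missing-path warning appearing across TestAddons, TestErrorSpam, and TestFunctional points at one shared environment issue (no Docker context metadata on the Jenkins agent) rather than separate per-test failures.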

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (35s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-100700 -n functional-100700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-100700 -n functional-100700: (12.4668792s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 logs -n 25: (8.9255692s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-974300 --log_dir                                     | nospam-974300     | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:48 UTC | 07 Aug 24 17:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-974300 --log_dir                                     | nospam-974300     | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:48 UTC | 07 Aug 24 17:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-974300 --log_dir                                     | nospam-974300     | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:48 UTC | 07 Aug 24 17:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-974300 --log_dir                                     | nospam-974300     | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:48 UTC | 07 Aug 24 17:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-974300 --log_dir                                     | nospam-974300     | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:48 UTC | 07 Aug 24 17:49 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-974300 --log_dir                                     | nospam-974300     | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:49 UTC | 07 Aug 24 17:49 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-974300 --log_dir                                     | nospam-974300     | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:49 UTC | 07 Aug 24 17:49 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-974300                                            | nospam-974300     | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:49 UTC | 07 Aug 24 17:50 UTC |
	| start   | -p functional-100700                                        | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:50 UTC | 07 Aug 24 17:54 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-100700                                        | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:54 UTC | 07 Aug 24 17:56 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-100700 cache add                                 | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:56 UTC | 07 Aug 24 17:56 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-100700 cache add                                 | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:56 UTC | 07 Aug 24 17:56 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-100700 cache add                                 | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:56 UTC | 07 Aug 24 17:56 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-100700 cache add                                 | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | minikube-local-cache-test:functional-100700                 |                   |                   |         |                     |                     |
	| cache   | functional-100700 cache delete                              | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | minikube-local-cache-test:functional-100700                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	| ssh     | functional-100700 ssh sudo                                  | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-100700                                           | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-100700 ssh                                       | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-100700 cache reload                              | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	| ssh     | functional-100700 ssh                                       | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-100700 kubectl --                                | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | --context functional-100700                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 17:54:21
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 17:54:21.558308    9640 out.go:291] Setting OutFile to fd 812 ...
	I0807 17:54:21.559022    9640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:54:21.559022    9640 out.go:304] Setting ErrFile to fd 1020...
	I0807 17:54:21.559022    9640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:54:21.582643    9640 out.go:298] Setting JSON to false
	I0807 17:54:21.586295    9640 start.go:129] hostinfo: {"hostname":"minikube6","uptime":315191,"bootTime":1722738070,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4717 Build 19045.4717","kernelVersion":"10.0.19045.4717 Build 19045.4717","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0807 17:54:21.586295    9640 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 17:54:21.592844    9640 out.go:177] * [functional-100700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	I0807 17:54:21.596187    9640 notify.go:220] Checking for updates...
	I0807 17:54:21.596187    9640 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 17:54:21.601111    9640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 17:54:21.605009    9640 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0807 17:54:21.608126    9640 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 17:54:21.611147    9640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 17:54:21.614630    9640 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 17:54:21.614630    9640 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 17:54:27.071106    9640 out.go:177] * Using the hyperv driver based on existing profile
	I0807 17:54:27.075162    9640 start.go:297] selected driver: hyperv
	I0807 17:54:27.075162    9640 start.go:901] validating driver "hyperv" against &{Name:functional-100700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.3 ClusterName:functional-100700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.235.211 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 17:54:27.075162    9640 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 17:54:27.126971    9640 cni.go:84] Creating CNI manager for ""
	I0807 17:54:27.126971    9640 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 17:54:27.127342    9640 start.go:340] cluster config:
	{Name:functional-100700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-100700 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.235.211 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 17:54:27.127743    9640 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 17:54:27.133221    9640 out.go:177] * Starting "functional-100700" primary control-plane node in "functional-100700" cluster
	I0807 17:54:27.137148    9640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 17:54:27.138185    9640 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0807 17:54:27.138185    9640 cache.go:56] Caching tarball of preloaded images
	I0807 17:54:27.138185    9640 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0807 17:54:27.138185    9640 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 17:54:27.138852    9640 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\config.json ...
	I0807 17:54:27.141099    9640 start.go:360] acquireMachinesLock for functional-100700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 17:54:27.141099    9640 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-100700"
	I0807 17:54:27.141099    9640 start.go:96] Skipping create...Using existing machine configuration
	I0807 17:54:27.141099    9640 fix.go:54] fixHost starting: 
	I0807 17:54:27.142270    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:54:30.005456    9640 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:54:30.005456    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:54:30.006299    9640 fix.go:112] recreateIfNeeded on functional-100700: state=Running err=<nil>
	W0807 17:54:30.006393    9640 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 17:54:30.010488    9640 out.go:177] * Updating the running hyperv "functional-100700" VM ...
	I0807 17:54:30.016218    9640 machine.go:94] provisionDockerMachine start ...
	I0807 17:54:30.016218    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:54:32.255201    9640 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:54:32.255201    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:54:32.255955    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:54:34.925437    9640 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:54:34.925437    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:54:34.932415    9640 main.go:141] libmachine: Using SSH client type: native
	I0807 17:54:34.933285    9640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:54:34.933285    9640 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 17:54:35.055467    9640 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-100700
	
	I0807 17:54:35.055555    9640 buildroot.go:166] provisioning hostname "functional-100700"
	I0807 17:54:35.055668    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:54:37.286428    9640 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:54:37.286428    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:54:37.286428    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:54:39.894892    9640 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:54:39.894991    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:54:39.900159    9640 main.go:141] libmachine: Using SSH client type: native
	I0807 17:54:39.900734    9640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:54:39.900883    9640 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-100700 && echo "functional-100700" | sudo tee /etc/hostname
	I0807 17:54:40.051628    9640 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-100700
	
	I0807 17:54:40.051689    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:54:42.242303    9640 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:54:42.242303    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:54:42.242303    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:54:44.932721    9640 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:54:44.933682    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:54:44.940027    9640 main.go:141] libmachine: Using SSH client type: native
	I0807 17:54:44.940176    9640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:54:44.940176    9640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-100700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-100700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-100700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 17:54:45.079048    9640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 17:54:45.079147    9640 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0807 17:54:45.079147    9640 buildroot.go:174] setting up certificates
	I0807 17:54:45.079147    9640 provision.go:84] configureAuth start
	I0807 17:54:45.079147    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:54:47.353905    9640 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:54:47.354412    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:54:47.354412    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:54:50.110407    9640 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:54:50.110407    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:54:50.110705    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:54:52.305159    9640 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:54:52.305223    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:54:52.305285    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:54:54.982147    9640 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:54:54.982700    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:54:54.982700    9640 provision.go:143] copyHostCerts
	I0807 17:54:54.982700    9640 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0807 17:54:54.982700    9640 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0807 17:54:54.982700    9640 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0807 17:54:54.983339    9640 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0807 17:54:54.984576    9640 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0807 17:54:54.984576    9640 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0807 17:54:54.984576    9640 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0807 17:54:54.985119    9640 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0807 17:54:54.985893    9640 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0807 17:54:54.985893    9640 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0807 17:54:54.986426    9640 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0807 17:54:54.986816    9640 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0807 17:54:54.988188    9640 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-100700 san=[127.0.0.1 172.28.235.211 functional-100700 localhost minikube]
	I0807 17:54:55.249825    9640 provision.go:177] copyRemoteCerts
	I0807 17:54:55.262204    9640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 17:54:55.262299    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:54:57.453355    9640 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:54:57.453355    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:54:57.453355    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:55:00.098432    9640 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:55:00.098506    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:55:00.099089    9640 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:55:00.206580    9640 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9443131s)
	I0807 17:55:00.206580    9640 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0807 17:55:00.207173    9640 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 17:55:00.254855    9640 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0807 17:55:00.255426    9640 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0807 17:55:00.305717    9640 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0807 17:55:00.306306    9640 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0807 17:55:00.361143    9640 provision.go:87] duration metric: took 15.2816413s to configureAuth
	I0807 17:55:00.361143    9640 buildroot.go:189] setting minikube options for container-runtime
	I0807 17:55:00.361280    9640 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 17:55:00.361280    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:55:02.610715    9640 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:55:02.610715    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:55:02.610715    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:55:05.254232    9640 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:55:05.254232    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:55:05.261423    9640 main.go:141] libmachine: Using SSH client type: native
	I0807 17:55:05.261566    9640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:55:05.261566    9640 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0807 17:55:05.382260    9640 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0807 17:55:05.382333    9640 buildroot.go:70] root file system type: tmpfs
	I0807 17:55:05.382547    9640 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0807 17:55:05.382667    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:55:07.605523    9640 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:55:07.605523    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:55:07.606065    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:55:10.240684    9640 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:55:10.241636    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:55:10.246755    9640 main.go:141] libmachine: Using SSH client type: native
	I0807 17:55:10.247710    9640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:55:10.247710    9640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0807 17:55:10.412284    9640 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0807 17:55:10.412284    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:55:12.624596    9640 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:55:12.624596    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:55:12.624738    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:55:15.295355    9640 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:55:15.295467    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:55:15.300784    9640 main.go:141] libmachine: Using SSH client type: native
	I0807 17:55:15.301521    9640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:55:15.301521    9640 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0807 17:55:15.443152    9640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 17:55:15.443152    9640 machine.go:97] duration metric: took 45.4263521s to provisionDockerMachine
	I0807 17:55:15.443152    9640 start.go:293] postStartSetup for "functional-100700" (driver="hyperv")
	I0807 17:55:15.443152    9640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 17:55:15.456863    9640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 17:55:15.456863    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:55:17.638542    9640 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:55:17.638686    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:55:17.638761    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:55:20.340935    9640 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:55:20.340935    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:55:20.341364    9640 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:55:20.454072    9640 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9971452s)
	I0807 17:55:20.466908    9640 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 17:55:20.474748    9640 command_runner.go:130] > NAME=Buildroot
	I0807 17:55:20.474748    9640 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0807 17:55:20.474748    9640 command_runner.go:130] > ID=buildroot
	I0807 17:55:20.474748    9640 command_runner.go:130] > VERSION_ID=2023.02.9
	I0807 17:55:20.474748    9640 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0807 17:55:20.475108    9640 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 17:55:20.475199    9640 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0807 17:55:20.475730    9640 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0807 17:55:20.477203    9640 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> 96602.pem in /etc/ssl/certs
	I0807 17:55:20.477310    9640 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> /etc/ssl/certs/96602.pem
	I0807 17:55:20.479140    9640 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9660\hosts -> hosts in /etc/test/nested/copy/9660
	I0807 17:55:20.479140    9640 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9660\hosts -> /etc/test/nested/copy/9660/hosts
	I0807 17:55:20.492095    9640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9660
	I0807 17:55:20.510094    9640 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /etc/ssl/certs/96602.pem (1708 bytes)
	I0807 17:55:20.562405    9640 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9660\hosts --> /etc/test/nested/copy/9660/hosts (40 bytes)
	I0807 17:55:20.615758    9640 start.go:296] duration metric: took 5.1725399s for postStartSetup
	I0807 17:55:20.615758    9640 fix.go:56] duration metric: took 53.4739751s for fixHost
	I0807 17:55:20.615758    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:55:22.805803    9640 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:55:22.806253    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:55:22.806352    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:55:25.456279    9640 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:55:25.456279    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:55:25.462627    9640 main.go:141] libmachine: Using SSH client type: native
	I0807 17:55:25.463348    9640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:55:25.463348    9640 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 17:55:25.585562    9640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723053325.589495644
	
	I0807 17:55:25.585681    9640 fix.go:216] guest clock: 1723053325.589495644
	I0807 17:55:25.585681    9640 fix.go:229] Guest: 2024-08-07 17:55:25.589495644 +0000 UTC Remote: 2024-08-07 17:55:20.6157586 +0000 UTC m=+59.235101201 (delta=4.973737044s)
	I0807 17:55:25.585854    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:55:27.828766    9640 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:55:27.829740    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:55:27.829770    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:55:30.490036    9640 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:55:30.490036    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:55:30.496574    9640 main.go:141] libmachine: Using SSH client type: native
	I0807 17:55:30.497360    9640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:55:30.497360    9640 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1723053325
	I0807 17:55:30.643521    9640 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Aug  7 17:55:25 UTC 2024
	
	I0807 17:55:30.643521    9640 fix.go:236] clock set: Wed Aug  7 17:55:25 UTC 2024
	 (err=<nil>)
	I0807 17:55:30.643521    9640 start.go:83] releasing machines lock for "functional-100700", held for 1m3.5016092s
	I0807 17:55:30.643521    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:55:32.854871    9640 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:55:32.854871    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:55:32.855883    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:55:35.469704    9640 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:55:35.470661    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:55:35.474876    9640 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0807 17:55:35.474954    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:55:35.485718    9640 ssh_runner.go:195] Run: cat /version.json
	I0807 17:55:35.485718    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:55:37.804491    9640 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:55:37.804491    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:55:37.804491    9640 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:55:37.805325    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:55:37.805383    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:55:37.805383    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:55:40.605271    9640 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:55:40.606004    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:55:40.606435    9640 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:55:40.637090    9640 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:55:40.637483    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:55:40.637662    9640 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:55:40.700804    9640 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0807 17:55:40.700804    9640 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.2258613s)
	W0807 17:55:40.700804    9640 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0807 17:55:40.732418    9640 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0807 17:55:40.732514    9640 ssh_runner.go:235] Completed: cat /version.json: (5.2466325s)
	I0807 17:55:40.744935    9640 ssh_runner.go:195] Run: systemctl --version
	I0807 17:55:40.761089    9640 command_runner.go:130] > systemd 252 (252)
	I0807 17:55:40.761346    9640 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0807 17:55:40.777090    9640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0807 17:55:40.784965    9640 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0807 17:55:40.786437    9640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 17:55:40.799692    9640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 17:55:40.818580    9640 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0807 17:55:40.818682    9640 start.go:495] detecting cgroup driver to use...
	W0807 17:55:40.818580    9640 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0807 17:55:40.818807    9640 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0807 17:55:40.819025    9640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 17:55:40.862798    9640 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0807 17:55:40.875833    9640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0807 17:55:40.914901    9640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0807 17:55:40.935788    9640 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0807 17:55:40.948485    9640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0807 17:55:40.987215    9640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 17:55:41.019000    9640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0807 17:55:41.052068    9640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 17:55:41.086453    9640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 17:55:41.120036    9640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0807 17:55:41.152946    9640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0807 17:55:41.184605    9640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0807 17:55:41.217985    9640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 17:55:41.236892    9640 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0807 17:55:41.249136    9640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 17:55:41.281555    9640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:55:41.549644    9640 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0807 17:55:41.583843    9640 start.go:495] detecting cgroup driver to use...
	I0807 17:55:41.599033    9640 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0807 17:55:41.622559    9640 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0807 17:55:41.622602    9640 command_runner.go:130] > [Unit]
	I0807 17:55:41.622602    9640 command_runner.go:130] > Description=Docker Application Container Engine
	I0807 17:55:41.622602    9640 command_runner.go:130] > Documentation=https://docs.docker.com
	I0807 17:55:41.622602    9640 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0807 17:55:41.622602    9640 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0807 17:55:41.622657    9640 command_runner.go:130] > StartLimitBurst=3
	I0807 17:55:41.622657    9640 command_runner.go:130] > StartLimitIntervalSec=60
	I0807 17:55:41.622657    9640 command_runner.go:130] > [Service]
	I0807 17:55:41.622693    9640 command_runner.go:130] > Type=notify
	I0807 17:55:41.622693    9640 command_runner.go:130] > Restart=on-failure
	I0807 17:55:41.622730    9640 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0807 17:55:41.622772    9640 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0807 17:55:41.622772    9640 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0807 17:55:41.622772    9640 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0807 17:55:41.622826    9640 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0807 17:55:41.622868    9640 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0807 17:55:41.622868    9640 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0807 17:55:41.622868    9640 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0807 17:55:41.622912    9640 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0807 17:55:41.622912    9640 command_runner.go:130] > ExecStart=
	I0807 17:55:41.622984    9640 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0807 17:55:41.622984    9640 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0807 17:55:41.623034    9640 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0807 17:55:41.623072    9640 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0807 17:55:41.623113    9640 command_runner.go:130] > LimitNOFILE=infinity
	I0807 17:55:41.623113    9640 command_runner.go:130] > LimitNPROC=infinity
	I0807 17:55:41.623113    9640 command_runner.go:130] > LimitCORE=infinity
	I0807 17:55:41.623152    9640 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0807 17:55:41.623152    9640 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0807 17:55:41.623203    9640 command_runner.go:130] > TasksMax=infinity
	I0807 17:55:41.623203    9640 command_runner.go:130] > TimeoutStartSec=0
	I0807 17:55:41.623203    9640 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0807 17:55:41.623242    9640 command_runner.go:130] > Delegate=yes
	I0807 17:55:41.623242    9640 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0807 17:55:41.623242    9640 command_runner.go:130] > KillMode=process
	I0807 17:55:41.623242    9640 command_runner.go:130] > [Install]
	I0807 17:55:41.623283    9640 command_runner.go:130] > WantedBy=multi-user.target
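The drop-in unit dumped above relies on a systemd rule worth noting: a non-oneshot service may carry only one `ExecStart=`, so the drop-in first emits an empty `ExecStart=` to clear the inherited command before setting its own. A minimal unprivileged sketch of that shape (temp dir instead of `/etc/systemd/system`, paths illustrative only):

```shell
# Write a docker-style drop-in to a temp dir and check its ExecStart lines.
tmp=$(mktemp -d)
cat > "$tmp/10-machine.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
# The empty ExecStart= must come first; without it systemd refuses the unit
# with "more than one ExecStart= setting" (only Type=oneshot allows several).
grep -c '^ExecStart=' "$tmp/10-machine.conf"   # 2: one clear + one set
```

On a live host, `systemctl cat docker` shows the base unit and every drop-in merged in order, which is the quickest way to confirm which `ExecStart=` wins.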
	I0807 17:55:41.636117    9640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 17:55:41.668865    9640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 17:55:41.727112    9640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 17:55:41.764405    9640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 17:55:41.794749    9640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 17:55:41.829902    9640 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0807 17:55:41.842525    9640 ssh_runner.go:195] Run: which cri-dockerd
	I0807 17:55:41.848493    9640 command_runner.go:130] > /usr/bin/cri-dockerd
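The `printf ... | sudo tee /etc/crictl.yaml` step above points crictl at the cri-dockerd socket. A stand-in for that step, written to a temp file so it runs without sudo (the real target is `/etc/crictl.yaml`):

```shell
# Generate the one-line crictl config minikube installs for cri-dockerd.
cfg=$(mktemp)
printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' > "$cfg"
cat "$cfg"   # runtime-endpoint: unix:///var/run/cri-dockerd.sock
```

With this file in place, `sudo crictl version` talks to Docker through cri-dockerd instead of probing the default runtime sockets.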
	I0807 17:55:41.861146    9640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0807 17:55:41.882147    9640 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0807 17:55:41.930040    9640 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0807 17:55:42.196729    9640 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0807 17:55:42.458555    9640 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0807 17:55:42.458803    9640 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0807 17:55:42.508966    9640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:55:42.772700    9640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 17:55:55.791246    9640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.0182981s)
	I0807 17:55:55.803968    9640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0807 17:55:55.845783    9640 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0807 17:55:55.885740    9640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 17:55:55.920738    9640 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0807 17:55:56.140649    9640 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0807 17:55:56.346330    9640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:55:56.552904    9640 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0807 17:55:56.594406    9640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 17:55:56.628658    9640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:55:56.843687    9640 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0807 17:55:56.995272    9640 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0807 17:55:57.008272    9640 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0807 17:55:57.019262    9640 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0807 17:55:57.019670    9640 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0807 17:55:57.019670    9640 command_runner.go:130] > Device: 0,22	Inode: 1496        Links: 1
	I0807 17:55:57.019670    9640 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0807 17:55:57.019670    9640 command_runner.go:130] > Access: 2024-08-07 17:55:56.893349435 +0000
	I0807 17:55:57.019670    9640 command_runner.go:130] > Modify: 2024-08-07 17:55:56.893349435 +0000
	I0807 17:55:57.019670    9640 command_runner.go:130] > Change: 2024-08-07 17:55:56.899349455 +0000
	I0807 17:55:57.019670    9640 command_runner.go:130] >  Birth: -
	I0807 17:55:57.019800    9640 start.go:563] Will wait 60s for crictl version
	I0807 17:55:57.032443    9640 ssh_runner.go:195] Run: which crictl
	I0807 17:55:57.041267    9640 command_runner.go:130] > /usr/bin/crictl
	I0807 17:55:57.054477    9640 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 17:55:57.136940    9640 command_runner.go:130] > Version:  0.1.0
	I0807 17:55:57.136940    9640 command_runner.go:130] > RuntimeName:  docker
	I0807 17:55:57.136940    9640 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0807 17:55:57.136940    9640 command_runner.go:130] > RuntimeApiVersion:  v1
	I0807 17:55:57.136940    9640 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0807 17:55:57.145025    9640 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 17:55:57.182608    9640 command_runner.go:130] > 27.1.1
	I0807 17:55:57.191522    9640 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 17:55:57.237723    9640 command_runner.go:130] > 27.1.1
	I0807 17:55:57.246829    9640 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0807 17:55:57.246829    9640 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0807 17:55:57.250517    9640 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0807 17:55:57.250517    9640 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0807 17:55:57.250517    9640 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0807 17:55:57.250517    9640 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:f6:3a:6a Flags:up|broadcast|multicast|running}
	I0807 17:55:57.253417    9640 ip.go:210] interface addr: fe80::e7eb:b592:d388:ff99/64
	I0807 17:55:57.253417    9640 ip.go:210] interface addr: 172.28.224.1/20
	I0807 17:55:57.264108    9640 ssh_runner.go:195] Run: grep 172.28.224.1	host.minikube.internal$ /etc/hosts
	I0807 17:55:57.271874    9640 command_runner.go:130] > 172.28.224.1	host.minikube.internal
	I0807 17:55:57.272421    9640 kubeadm.go:883] updating cluster {Name:functional-100700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.30.3 ClusterName:functional-100700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.235.211 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0807 17:55:57.272673    9640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 17:55:57.285088    9640 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0807 17:55:57.309911    9640 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0807 17:55:57.309911    9640 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0807 17:55:57.309911    9640 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0807 17:55:57.309911    9640 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0807 17:55:57.309911    9640 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0807 17:55:57.309911    9640 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0807 17:55:57.310917    9640 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0807 17:55:57.311071    9640 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 17:55:57.311071    9640 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0807 17:55:57.311071    9640 docker.go:615] Images already preloaded, skipping extraction
	I0807 17:55:57.320951    9640 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0807 17:55:57.349922    9640 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0807 17:55:57.349922    9640 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0807 17:55:57.349922    9640 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0807 17:55:57.349922    9640 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0807 17:55:57.349922    9640 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0807 17:55:57.349922    9640 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0807 17:55:57.349922    9640 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0807 17:55:57.349922    9640 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 17:55:57.349922    9640 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
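The two identical listings above are minikube's preload check: it compares `docker images --format {{.Repository}}:{{.Tag}}` against the expected image set and skips tarball extraction when every image is already present. A self-contained sketch of that comparison using fixed lists (no Docker daemon required):

```shell
# Stand-in for the preload check: every expected image must appear,
# as an exact line, in the `docker images` listing.
have='registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/pause:3.9'
missing=0
for img in registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/pause:3.9; do
  printf '%s\n' "$have" | grep -qx "$img" || missing=1
done
[ "$missing" -eq 0 ] && echo 'Images are preloaded, skipping loading'
```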
	I0807 17:55:57.349922    9640 cache_images.go:84] Images are preloaded, skipping loading
	I0807 17:55:57.349922    9640 kubeadm.go:934] updating node { 172.28.235.211 8441 v1.30.3 docker true true} ...
	I0807 17:55:57.349922    9640 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-100700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.235.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:functional-100700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 17:55:57.358914    9640 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0807 17:55:57.429795    9640 command_runner.go:130] > cgroupfs
	I0807 17:55:57.429795    9640 cni.go:84] Creating CNI manager for ""
	I0807 17:55:57.429795    9640 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 17:55:57.429795    9640 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 17:55:57.429795    9640 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.235.211 APIServerPort:8441 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-100700 NodeName:functional-100700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.235.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.235.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 17:55:57.430454    9640 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.235.211
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-100700"
	  kubeletExtraArgs:
	    node-ip: 172.28.235.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.235.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
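The kubeadm config generated above is a single YAML stream of four documents, one `kind:` each: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by `---`. A quick structural sanity check on a skeleton of that stream:

```shell
# Count the kind: lines in a four-document kubeadm config skeleton.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
kind: InitConfiguration
---
kind: ClusterConfiguration
---
kind: KubeletConfiguration
---
kind: KubeProxyConfiguration
EOF
grep -c '^kind:' "$cfg"   # 4, one per document
```

On a node with kubeadm installed, `kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run` would validate the full file without touching the cluster.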
	I0807 17:55:57.441354    9640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 17:55:57.459030    9640 command_runner.go:130] > kubeadm
	I0807 17:55:57.459030    9640 command_runner.go:130] > kubectl
	I0807 17:55:57.459030    9640 command_runner.go:130] > kubelet
	I0807 17:55:57.459030    9640 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 17:55:57.470294    9640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0807 17:55:57.487408    9640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0807 17:55:57.518670    9640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 17:55:57.554863    9640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0807 17:55:57.599788    9640 ssh_runner.go:195] Run: grep 172.28.235.211	control-plane.minikube.internal$ /etc/hosts
	I0807 17:55:57.605248    9640 command_runner.go:130] > 172.28.235.211	control-plane.minikube.internal
	I0807 17:55:57.618801    9640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:55:57.836820    9640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 17:55:57.863289    9640 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700 for IP: 172.28.235.211
	I0807 17:55:57.863289    9640 certs.go:194] generating shared ca certs ...
	I0807 17:55:57.863289    9640 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:55:57.864284    9640 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0807 17:55:57.864284    9640 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0807 17:55:57.864284    9640 certs.go:256] generating profile certs ...
	I0807 17:55:57.865285    9640 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.key
	I0807 17:55:57.865285    9640 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\apiserver.key.8ae1dd7b
	I0807 17:55:57.866286    9640 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\proxy-client.key
	I0807 17:55:57.866286    9640 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0807 17:55:57.866286    9640 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0807 17:55:57.866286    9640 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0807 17:55:57.866286    9640 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0807 17:55:57.866286    9640 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0807 17:55:57.866286    9640 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0807 17:55:57.867285    9640 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0807 17:55:57.867285    9640 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0807 17:55:57.867285    9640 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem (1338 bytes)
	W0807 17:55:57.868296    9640 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660_empty.pem, impossibly tiny 0 bytes
	I0807 17:55:57.868296    9640 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0807 17:55:57.868296    9640 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0807 17:55:57.869300    9640 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0807 17:55:57.869300    9640 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0807 17:55:57.869300    9640 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem (1708 bytes)
	I0807 17:55:57.869300    9640 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem -> /usr/share/ca-certificates/9660.pem
	I0807 17:55:57.870325    9640 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> /usr/share/ca-certificates/96602.pem
	I0807 17:55:57.870325    9640 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0807 17:55:57.871286    9640 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 17:55:57.919161    9640 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 17:55:57.965815    9640 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 17:55:58.011058    9640 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0807 17:55:58.056410    9640 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0807 17:55:58.104724    9640 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0807 17:55:58.150333    9640 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 17:55:58.201969    9640 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 17:55:58.247725    9640 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem --> /usr/share/ca-certificates/9660.pem (1338 bytes)
	I0807 17:55:58.292464    9640 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /usr/share/ca-certificates/96602.pem (1708 bytes)
	I0807 17:55:58.337655    9640 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 17:55:58.382941    9640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 17:55:58.424369    9640 ssh_runner.go:195] Run: openssl version
	I0807 17:55:58.431598    9640 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0807 17:55:58.443539    9640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9660.pem && ln -fs /usr/share/ca-certificates/9660.pem /etc/ssl/certs/9660.pem"
	I0807 17:55:58.474780    9640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9660.pem
	I0807 17:55:58.482288    9640 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  7 17:50 /usr/share/ca-certificates/9660.pem
	I0807 17:55:58.482288    9640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 17:50 /usr/share/ca-certificates/9660.pem
	I0807 17:55:58.494365    9640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9660.pem
	I0807 17:55:58.503154    9640 command_runner.go:130] > 51391683
	I0807 17:55:58.514672    9640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9660.pem /etc/ssl/certs/51391683.0"
	I0807 17:55:58.546533    9640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96602.pem && ln -fs /usr/share/ca-certificates/96602.pem /etc/ssl/certs/96602.pem"
	I0807 17:55:58.578255    9640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96602.pem
	I0807 17:55:58.585217    9640 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  7 17:50 /usr/share/ca-certificates/96602.pem
	I0807 17:55:58.585217    9640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 17:50 /usr/share/ca-certificates/96602.pem
	I0807 17:55:58.596717    9640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96602.pem
	I0807 17:55:58.611238    9640 command_runner.go:130] > 3ec20f2e
	I0807 17:55:58.622426    9640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96602.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 17:55:58.650991    9640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 17:55:58.680178    9640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 17:55:58.686604    9640 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  7 17:33 /usr/share/ca-certificates/minikubeCA.pem
	I0807 17:55:58.686697    9640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:33 /usr/share/ca-certificates/minikubeCA.pem
	I0807 17:55:58.696287    9640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 17:55:58.706294    9640 command_runner.go:130] > b5213941
	I0807 17:55:58.717461    9640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
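The three blocks above install each CA under `/etc/ssl/certs/<subject-hash>.0` — the hashed-symlink layout OpenSSL uses for CA lookup, where the hash comes from `openssl x509 -hash -noout`. A sketch with a throwaway self-signed CA in a temp dir (the CN and filenames are illustrative):

```shell
# Reproduce the hash-symlink step with a disposable certificate.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj '/CN=demo-ca' -keyout "$dir/k.pem" -out "$dir/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")   # 8 hex chars
ln -fs "$dir/ca.pem" "$dir/$hash.0"                   # <hash>.0 symlink
ls "$dir/$hash.0"
```

The `.0` suffix disambiguates multiple CAs that happen to share a subject hash (`.1`, `.2`, ... for collisions).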
	I0807 17:55:58.747794    9640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 17:55:58.757284    9640 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 17:55:58.757446    9640 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0807 17:55:58.757446    9640 command_runner.go:130] > Device: 8,1	Inode: 7337298     Links: 1
	I0807 17:55:58.757446    9640 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0807 17:55:58.757446    9640 command_runner.go:130] > Access: 2024-08-07 17:53:10.698672865 +0000
	I0807 17:55:58.757446    9640 command_runner.go:130] > Modify: 2024-08-07 17:53:10.698672865 +0000
	I0807 17:55:58.757446    9640 command_runner.go:130] > Change: 2024-08-07 17:53:10.698672865 +0000
	I0807 17:55:58.757446    9640 command_runner.go:130] >  Birth: 2024-08-07 17:53:10.698672865 +0000
	I0807 17:55:58.771614    9640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0807 17:55:58.781069    9640 command_runner.go:130] > Certificate will not expire
	I0807 17:55:58.792369    9640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0807 17:55:58.801747    9640 command_runner.go:130] > Certificate will not expire
	I0807 17:55:58.813753    9640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0807 17:55:58.822949    9640 command_runner.go:130] > Certificate will not expire
	I0807 17:55:58.833701    9640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0807 17:55:58.842872    9640 command_runner.go:130] > Certificate will not expire
	I0807 17:55:58.855634    9640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0807 17:55:58.866228    9640 command_runner.go:130] > Certificate will not expire
	I0807 17:55:58.877812    9640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0807 17:55:58.886841    9640 command_runner.go:130] > Certificate will not expire
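Each "Certificate will not expire" line above is the success path of `openssl x509 -checkend 86400`: exit 0 means the certificate will still be valid 86400 seconds (24 hours) from now. A self-contained demonstration with a throwaway cert valid for 30 days (subject and paths are illustrative):

```shell
# -checkend N exits 0 iff the cert is still valid N seconds from now.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj '/CN=minikube-test' \
  -keyout "$dir/key.pem" -out "$dir/cert.pem" 2>/dev/null
openssl x509 -noout -in "$dir/cert.pem" -checkend 86400 \
  && echo 'Certificate will not expire'
```

A cert within 24 hours of expiry would instead exit 1 and print "Certificate will expire", which is what triggers minikube's cert regeneration path.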
	I0807 17:55:58.886841    9640 kubeadm.go:392] StartCluster: {Name:functional-100700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-100700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.235.211 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 17:55:58.897406    9640 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0807 17:55:58.941623    9640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0807 17:55:58.961618    9640 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0807 17:55:58.961618    9640 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0807 17:55:58.961618    9640 command_runner.go:130] > /var/lib/minikube/etcd:
	I0807 17:55:58.961618    9640 command_runner.go:130] > member
	I0807 17:55:58.961618    9640 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0807 17:55:58.961618    9640 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0807 17:55:58.972678    9640 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0807 17:55:58.989268    9640 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0807 17:55:58.990658    9640 kubeconfig.go:125] found "functional-100700" server: "https://172.28.235.211:8441"
	I0807 17:55:58.992646    9640 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 17:55:58.993679    9640 kapi.go:59] client config for functional-100700: &rest.Config{Host:"https://172.28.235.211:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-100700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-100700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1da64c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0807 17:55:58.995705    9640 cert_rotation.go:137] Starting client certificate rotation controller
	I0807 17:55:59.010299    9640 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0807 17:55:59.027854    9640 kubeadm.go:630] The running cluster does not require reconfiguration: 172.28.235.211
	I0807 17:55:59.028662    9640 kubeadm.go:1160] stopping kube-system containers ...
	I0807 17:55:59.038331    9640 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0807 17:55:59.073740    9640 command_runner.go:130] > 1ca5873cb027
	I0807 17:55:59.073740    9640 command_runner.go:130] > 32e3bea2a931
	I0807 17:55:59.073740    9640 command_runner.go:130] > 8257548df8d0
	I0807 17:55:59.073740    9640 command_runner.go:130] > 9f7b90986285
	I0807 17:55:59.073740    9640 command_runner.go:130] > a334b1535e2f
	I0807 17:55:59.073740    9640 command_runner.go:130] > 4f7e1db775dc
	I0807 17:55:59.073740    9640 command_runner.go:130] > 76120dfe1c32
	I0807 17:55:59.073740    9640 command_runner.go:130] > 88ef6e03a7d4
	I0807 17:55:59.073740    9640 command_runner.go:130] > 03079679d68c
	I0807 17:55:59.073740    9640 command_runner.go:130] > 8e6d65d222dd
	I0807 17:55:59.073740    9640 command_runner.go:130] > 6f09e3713754
	I0807 17:55:59.073740    9640 command_runner.go:130] > f87ac0281bc2
	I0807 17:55:59.073740    9640 command_runner.go:130] > b9283200bae3
	I0807 17:55:59.073740    9640 command_runner.go:130] > f907706c00eb
	I0807 17:55:59.073740    9640 docker.go:483] Stopping containers: [1ca5873cb027 32e3bea2a931 8257548df8d0 9f7b90986285 a334b1535e2f 4f7e1db775dc 76120dfe1c32 88ef6e03a7d4 03079679d68c 8e6d65d222dd 6f09e3713754 f87ac0281bc2 b9283200bae3 f907706c00eb]
	I0807 17:55:59.081717    9640 ssh_runner.go:195] Run: docker stop 1ca5873cb027 32e3bea2a931 8257548df8d0 9f7b90986285 a334b1535e2f 4f7e1db775dc 76120dfe1c32 88ef6e03a7d4 03079679d68c 8e6d65d222dd 6f09e3713754 f87ac0281bc2 b9283200bae3 f907706c00eb
	I0807 17:55:59.113381    9640 command_runner.go:130] > 1ca5873cb027
	I0807 17:55:59.113381    9640 command_runner.go:130] > 32e3bea2a931
	I0807 17:55:59.113381    9640 command_runner.go:130] > 8257548df8d0
	I0807 17:55:59.113381    9640 command_runner.go:130] > 9f7b90986285
	I0807 17:55:59.113381    9640 command_runner.go:130] > a334b1535e2f
	I0807 17:55:59.113381    9640 command_runner.go:130] > 4f7e1db775dc
	I0807 17:55:59.113381    9640 command_runner.go:130] > 76120dfe1c32
	I0807 17:55:59.113381    9640 command_runner.go:130] > 88ef6e03a7d4
	I0807 17:55:59.113381    9640 command_runner.go:130] > 03079679d68c
	I0807 17:55:59.113381    9640 command_runner.go:130] > 8e6d65d222dd
	I0807 17:55:59.113381    9640 command_runner.go:130] > 6f09e3713754
	I0807 17:55:59.113381    9640 command_runner.go:130] > f87ac0281bc2
	I0807 17:55:59.113381    9640 command_runner.go:130] > b9283200bae3
	I0807 17:55:59.113381    9640 command_runner.go:130] > f907706c00eb
	I0807 17:55:59.125957    9640 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0807 17:55:59.198947    9640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0807 17:55:59.221028    9640 command_runner.go:130] > -rw------- 1 root root 5651 Aug  7 17:53 /etc/kubernetes/admin.conf
	I0807 17:55:59.221028    9640 command_runner.go:130] > -rw------- 1 root root 5658 Aug  7 17:53 /etc/kubernetes/controller-manager.conf
	I0807 17:55:59.221028    9640 command_runner.go:130] > -rw------- 1 root root 2007 Aug  7 17:53 /etc/kubernetes/kubelet.conf
	I0807 17:55:59.221028    9640 command_runner.go:130] > -rw------- 1 root root 5606 Aug  7 17:53 /etc/kubernetes/scheduler.conf
	I0807 17:55:59.221028    9640 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Aug  7 17:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Aug  7 17:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Aug  7 17:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Aug  7 17:53 /etc/kubernetes/scheduler.conf
	
	I0807 17:55:59.236528    9640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0807 17:55:59.255063    9640 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0807 17:55:59.265105    9640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0807 17:55:59.285089    9640 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0807 17:55:59.296070    9640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0807 17:55:59.314629    9640 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0807 17:55:59.326830    9640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0807 17:55:59.354703    9640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0807 17:55:59.370715    9640 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0807 17:55:59.382254    9640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
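	[note] The sequence above is minikube's kubeconfig reconciliation: it greps each /etc/kubernetes/*.conf for the expected control-plane endpoint, and a grep exit status of 1 (no match) triggers `rm -f` so that `kubeadm init phase kubeconfig` regenerates the file. A sketch of that exit-status logic against a hypothetical temp file (not the real /etc/kubernetes paths):

```shell
# Stale config pointing at the wrong endpoint (hypothetical path)
conf=/tmp/scheduler.conf
echo "server: https://127.0.0.1:8443" > "$conf"

# grep -q exits 1 when the expected endpoint is absent; remove so kubeadm rewrites it
if ! grep -q "https://control-plane.minikube.internal:8441" "$conf"; then
  rm -f "$conf"
fi
```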
	I0807 17:55:59.411302    9640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0807 17:55:59.432855    9640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0807 17:55:59.512305    9640 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0807 17:55:59.512305    9640 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0807 17:55:59.512305    9640 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0807 17:55:59.512305    9640 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0807 17:55:59.512305    9640 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0807 17:55:59.512305    9640 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0807 17:55:59.512305    9640 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0807 17:55:59.512305    9640 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0807 17:55:59.512305    9640 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0807 17:55:59.512305    9640 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0807 17:55:59.512305    9640 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0807 17:55:59.512305    9640 command_runner.go:130] > [certs] Using the existing "sa" key
	I0807 17:55:59.512305    9640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0807 17:56:01.194747    9640 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0807 17:56:01.194747    9640 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0807 17:56:01.194747    9640 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I0807 17:56:01.194747    9640 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0807 17:56:01.194747    9640 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0807 17:56:01.194747    9640 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0807 17:56:01.194747    9640 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.6824205s)
	I0807 17:56:01.194747    9640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0807 17:56:01.504792    9640 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0807 17:56:01.504896    9640 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0807 17:56:01.504896    9640 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0807 17:56:01.504896    9640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0807 17:56:01.596012    9640 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0807 17:56:01.596012    9640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0807 17:56:01.596012    9640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0807 17:56:01.596012    9640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0807 17:56:01.596012    9640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0807 17:56:01.705422    9640 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0807 17:56:01.705422    9640 api_server.go:52] waiting for apiserver process to appear ...
	I0807 17:56:01.716454    9640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 17:56:02.223117    9640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 17:56:02.730240    9640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 17:56:03.227780    9640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 17:56:03.727257    9640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 17:56:03.759228    9640 command_runner.go:130] > 5343
	I0807 17:56:03.759391    9640 api_server.go:72] duration metric: took 2.0539425s to wait for apiserver process to appear ...
	I0807 17:56:03.759391    9640 api_server.go:88] waiting for apiserver healthz status ...
	I0807 17:56:03.759524    9640 api_server.go:253] Checking apiserver healthz at https://172.28.235.211:8441/healthz ...
	I0807 17:56:06.667355    9640 api_server.go:279] https://172.28.235.211:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0807 17:56:06.667355    9640 api_server.go:103] status: https://172.28.235.211:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0807 17:56:06.667929    9640 api_server.go:253] Checking apiserver healthz at https://172.28.235.211:8441/healthz ...
	I0807 17:56:06.723956    9640 api_server.go:279] https://172.28.235.211:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0807 17:56:06.724464    9640 api_server.go:103] status: https://172.28.235.211:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0807 17:56:06.764153    9640 api_server.go:253] Checking apiserver healthz at https://172.28.235.211:8441/healthz ...
	I0807 17:56:06.779650    9640 api_server.go:279] https://172.28.235.211:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0807 17:56:06.780023    9640 api_server.go:103] status: https://172.28.235.211:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0807 17:56:07.265466    9640 api_server.go:253] Checking apiserver healthz at https://172.28.235.211:8441/healthz ...
	I0807 17:56:07.276052    9640 api_server.go:279] https://172.28.235.211:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0807 17:56:07.276052    9640 api_server.go:103] status: https://172.28.235.211:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0807 17:56:07.770181    9640 api_server.go:253] Checking apiserver healthz at https://172.28.235.211:8441/healthz ...
	I0807 17:56:07.780848    9640 api_server.go:279] https://172.28.235.211:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0807 17:56:07.780944    9640 api_server.go:103] status: https://172.28.235.211:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0807 17:56:08.262442    9640 api_server.go:253] Checking apiserver healthz at https://172.28.235.211:8441/healthz ...
	I0807 17:56:08.271103    9640 api_server.go:279] https://172.28.235.211:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0807 17:56:08.271129    9640 api_server.go:103] status: https://172.28.235.211:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0807 17:56:08.772442    9640 api_server.go:253] Checking apiserver healthz at https://172.28.235.211:8441/healthz ...
	I0807 17:56:08.795714    9640 api_server.go:279] https://172.28.235.211:8441/healthz returned 200:
	ok
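	[note] The repeated healthz probes above show the apiserver's poststarthook checks clearing one by one (403 while RBAC bootstrap is pending, then 500 with individual hooks failing, then 200) while minikube re-polls roughly every 500 ms. A minimal sketch of that wait loop, with a stub function standing in for the HTTPS probe (the stub and its failure count are invented for illustration):

```shell
# Stub "healthz" that fails twice, then succeeds -- mimics the apiserver
# returning 500 until its poststarthooks finish
attempts=0
healthz() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

# Poll until healthy, pausing between probes (the real loop waits ~500 ms)
until healthz; do
  sleep 0.1
done
echo "healthz ok after $attempts attempts"
```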
	I0807 17:56:08.796378    9640 round_trippers.go:463] GET https://172.28.235.211:8441/version
	I0807 17:56:08.796378    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:08.796446    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:08.796446    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:08.808547    9640 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0807 17:56:08.808547    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:08.809529    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:08.809529    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:08.809529    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:08.809529    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:08.809529    9640 round_trippers.go:580]     Content-Length: 263
	I0807 17:56:08.809529    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:08 GMT
	I0807 17:56:08.809529    9640 round_trippers.go:580]     Audit-Id: ead39ee0-cdfd-4e3f-bbd2-634eac029347
	I0807 17:56:08.809529    9640 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0807 17:56:08.809529    9640 api_server.go:141] control plane version: v1.30.3
	I0807 17:56:08.809529    9640 api_server.go:131] duration metric: took 5.0500727s to wait for apiserver health ...
	I0807 17:56:08.809529    9640 cni.go:84] Creating CNI manager for ""
	I0807 17:56:08.809529    9640 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 17:56:08.817533    9640 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0807 17:56:08.832531    9640 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0807 17:56:08.875339    9640 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0807 17:56:08.944115    9640 system_pods.go:43] waiting for kube-system pods to appear ...
	I0807 17:56:08.944115    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods
	I0807 17:56:08.944115    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:08.944115    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:08.944115    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:08.953594    9640 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0807 17:56:08.953667    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:08.953667    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:08 GMT
	I0807 17:56:08.953667    9640 round_trippers.go:580]     Audit-Id: 7659b2fc-5183-4bc4-8908-408336907cf1
	I0807 17:56:08.953667    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:08.953667    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:08.953667    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:08.953762    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:08.954890    9640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"596"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-wwrwt","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5ddd5aa0-8aab-423c-855d-b8ea1633db28","resourceVersion":"575","creationTimestamp":"2024-08-07T17:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"47e7e42e-7133-43ea-9d53-4694600f4ac1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47e7e42e-7133-43ea-9d53-4694600f4ac1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51568 chars]
	I0807 17:56:08.960411    9640 system_pods.go:59] 7 kube-system pods found
	I0807 17:56:08.960411    9640 system_pods.go:61] "coredns-7db6d8ff4d-wwrwt" [5ddd5aa0-8aab-423c-855d-b8ea1633db28] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0807 17:56:08.960411    9640 system_pods.go:61] "etcd-functional-100700" [a5c7f27d-b19f-437f-a53a-2bf11d9ac9ca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0807 17:56:08.960411    9640 system_pods.go:61] "kube-apiserver-functional-100700" [5249a821-fdb0-4a53-9e1e-ff9336ba130f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0807 17:56:08.960411    9640 system_pods.go:61] "kube-controller-manager-functional-100700" [1f265aaa-5e7b-41b5-9fa6-ce6a907b24a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0807 17:56:08.960411    9640 system_pods.go:61] "kube-proxy-fhgrj" [7777c5e7-cff4-448e-9880-a3b6c6264025] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0807 17:56:08.960411    9640 system_pods.go:61] "kube-scheduler-functional-100700" [1dbe4230-2246-468f-abe1-077025453579] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0807 17:56:08.960411    9640 system_pods.go:61] "storage-provisioner" [6b73faee-4244-4a09-840f-e9d22cedafe6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0807 17:56:08.960411    9640 system_pods.go:74] duration metric: took 16.2955ms to wait for pod list to return data ...
	I0807 17:56:08.960411    9640 node_conditions.go:102] verifying NodePressure condition ...
	I0807 17:56:08.960411    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes
	I0807 17:56:08.960411    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:08.960411    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:08.960411    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:08.967385    9640 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 17:56:08.967385    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:08.967385    9640 round_trippers.go:580]     Audit-Id: 29503119-5994-4241-a9be-6c157600287d
	I0807 17:56:08.967385    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:08.967385    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:08.968078    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:08.968078    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:08.968078    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:08 GMT
	I0807 17:56:08.968358    9640 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"596"},"items":[{"metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4841 chars]
	I0807 17:56:08.969423    9640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 17:56:08.969511    9640 node_conditions.go:123] node cpu capacity is 2
	I0807 17:56:08.969593    9640 node_conditions.go:105] duration metric: took 9.1825ms to run NodePressure ...
	I0807 17:56:08.969663    9640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0807 17:56:09.653582    9640 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0807 17:56:09.653697    9640 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0807 17:56:09.653697    9640 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0807 17:56:09.653943    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0807 17:56:09.654058    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:09.654058    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:09.654058    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:09.658695    9640 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 17:56:09.658695    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:09.658695    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:09 GMT
	I0807 17:56:09.658695    9640 round_trippers.go:580]     Audit-Id: b113dd96-c389-4f1a-b6d1-e79f678f4f5f
	I0807 17:56:09.658695    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:09.658695    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:09.658695    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:09.658695    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:09.659714    9640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"604"},"items":[{"metadata":{"name":"etcd-functional-100700","namespace":"kube-system","uid":"a5c7f27d-b19f-437f-a53a-2bf11d9ac9ca","resourceVersion":"565","creationTimestamp":"2024-08-07T17:53:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.235.211:2379","kubernetes.io/config.hash":"beb9bcbcae0a46c5b0c329e08dd8f948","kubernetes.io/config.mirror":"beb9bcbcae0a46c5b0c329e08dd8f948","kubernetes.io/config.seen":"2024-08-07T17:53:23.867528890Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30988 chars]
	I0807 17:56:09.661689    9640 kubeadm.go:739] kubelet initialised
	I0807 17:56:09.661689    9640 kubeadm.go:740] duration metric: took 7.9917ms waiting for restarted kubelet to initialise ...
	I0807 17:56:09.661689    9640 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 17:56:09.661689    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods
	I0807 17:56:09.661689    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:09.661689    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:09.661689    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:09.669604    9640 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 17:56:09.669604    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:09.669604    9640 round_trippers.go:580]     Audit-Id: 10ed69b6-0aa7-4e19-b00d-6f65bb4e8c4a
	I0807 17:56:09.669604    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:09.669604    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:09.669604    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:09.669604    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:09.669604    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:09 GMT
	I0807 17:56:09.671546    9640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"604"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-wwrwt","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5ddd5aa0-8aab-423c-855d-b8ea1633db28","resourceVersion":"575","creationTimestamp":"2024-08-07T17:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"47e7e42e-7133-43ea-9d53-4694600f4ac1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47e7e42e-7133-43ea-9d53-4694600f4ac1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51377 chars]
	I0807 17:56:09.673521    9640 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wwrwt" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:09.673521    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wwrwt
	I0807 17:56:09.673521    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:09.673521    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:09.673521    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:09.676527    9640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 17:56:09.676527    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:09.676527    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:09.676527    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:09.676527    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:09.676527    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:09 GMT
	I0807 17:56:09.676527    9640 round_trippers.go:580]     Audit-Id: 9f50aeaf-255a-4b94-af2f-3cb48cdbe32a
	I0807 17:56:09.676527    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:09.676527    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-wwrwt","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5ddd5aa0-8aab-423c-855d-b8ea1633db28","resourceVersion":"575","creationTimestamp":"2024-08-07T17:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"47e7e42e-7133-43ea-9d53-4694600f4ac1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47e7e42e-7133-43ea-9d53-4694600f4ac1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6505 chars]
	I0807 17:56:09.677526    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:09.677526    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:09.677526    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:09.677526    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:09.679528    9640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 17:56:09.680524    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:09.680524    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:09 GMT
	I0807 17:56:09.680524    9640 round_trippers.go:580]     Audit-Id: b9a814e2-b60f-4c6e-b6f0-b3ba8707189d
	I0807 17:56:09.680524    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:09.680524    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:09.680524    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:09.680524    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:09.680524    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:10.179177    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wwrwt
	I0807 17:56:10.179177    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:10.179177    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:10.179177    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:10.183803    9640 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 17:56:10.183803    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:10.183803    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:10.183803    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:10.183803    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:10.183803    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:10 GMT
	I0807 17:56:10.183803    9640 round_trippers.go:580]     Audit-Id: 96ca7601-bcdd-46b6-b64a-d2f603aaae69
	I0807 17:56:10.183803    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:10.183803    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-wwrwt","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5ddd5aa0-8aab-423c-855d-b8ea1633db28","resourceVersion":"575","creationTimestamp":"2024-08-07T17:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"47e7e42e-7133-43ea-9d53-4694600f4ac1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47e7e42e-7133-43ea-9d53-4694600f4ac1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6505 chars]
	I0807 17:56:10.184544    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:10.184544    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:10.184544    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:10.184544    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:10.189828    9640 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 17:56:10.189828    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:10.190023    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:10.190023    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:10.190023    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:10 GMT
	I0807 17:56:10.190023    9640 round_trippers.go:580]     Audit-Id: 8ffa0b89-441c-497c-adee-392ae57c07f4
	I0807 17:56:10.190023    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:10.190023    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:10.190150    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:10.679773    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wwrwt
	I0807 17:56:10.679831    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:10.679831    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:10.679906    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:10.682639    9640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 17:56:10.682728    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:10.682728    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:10.682728    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:10.682728    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:10.682728    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:10.682728    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:10 GMT
	I0807 17:56:10.682728    9640 round_trippers.go:580]     Audit-Id: d1c1756f-e88f-4f58-a61e-ba3673bd5066
	I0807 17:56:10.683014    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-wwrwt","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5ddd5aa0-8aab-423c-855d-b8ea1633db28","resourceVersion":"607","creationTimestamp":"2024-08-07T17:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"47e7e42e-7133-43ea-9d53-4694600f4ac1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47e7e42e-7133-43ea-9d53-4694600f4ac1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6452 chars]
	I0807 17:56:10.683547    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:10.683547    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:10.683547    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:10.683547    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:10.687347    9640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 17:56:10.687347    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:10.687347    9640 round_trippers.go:580]     Audit-Id: 4a70ec03-0570-4ee3-a147-e30eaeb074c0
	I0807 17:56:10.687347    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:10.687347    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:10.687347    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:10.687347    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:10.687347    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:10 GMT
	I0807 17:56:10.687878    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:10.687985    9640 pod_ready.go:92] pod "coredns-7db6d8ff4d-wwrwt" in "kube-system" namespace has status "Ready":"True"
	I0807 17:56:10.687985    9640 pod_ready.go:81] duration metric: took 1.014451s for pod "coredns-7db6d8ff4d-wwrwt" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:10.687985    9640 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-100700" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:10.687985    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/etcd-functional-100700
	I0807 17:56:10.688515    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:10.688515    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:10.688515    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:10.691292    9640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 17:56:10.691292    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:10.691292    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:10.691292    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:10 GMT
	I0807 17:56:10.691292    9640 round_trippers.go:580]     Audit-Id: 068a47f5-87c4-480d-a20e-c7163e627106
	I0807 17:56:10.691409    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:10.691409    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:10.691430    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:10.691643    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-100700","namespace":"kube-system","uid":"a5c7f27d-b19f-437f-a53a-2bf11d9ac9ca","resourceVersion":"565","creationTimestamp":"2024-08-07T17:53:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.235.211:2379","kubernetes.io/config.hash":"beb9bcbcae0a46c5b0c329e08dd8f948","kubernetes.io/config.mirror":"beb9bcbcae0a46c5b0c329e08dd8f948","kubernetes.io/config.seen":"2024-08-07T17:53:23.867528890Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0807 17:56:10.691809    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:10.691809    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:10.691809    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:10.691809    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:10.694386    9640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 17:56:10.694386    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:10.694386    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:10.694386    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:10 GMT
	I0807 17:56:10.694386    9640 round_trippers.go:580]     Audit-Id: 18b10282-abdf-4953-9152-efa816e3158d
	I0807 17:56:10.694386    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:10.694386    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:10.694386    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:10.694386    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:11.197417    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/etcd-functional-100700
	I0807 17:56:11.197417    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:11.197417    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:11.197417    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:11.201002    9640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 17:56:11.202047    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:11.202093    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:11.202093    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:11.202093    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:11.202093    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:11.202093    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:11 GMT
	I0807 17:56:11.202154    9640 round_trippers.go:580]     Audit-Id: ffd0a6b5-62df-4bc2-9200-016c0836e2b1
	I0807 17:56:11.202388    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-100700","namespace":"kube-system","uid":"a5c7f27d-b19f-437f-a53a-2bf11d9ac9ca","resourceVersion":"565","creationTimestamp":"2024-08-07T17:53:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.235.211:2379","kubernetes.io/config.hash":"beb9bcbcae0a46c5b0c329e08dd8f948","kubernetes.io/config.mirror":"beb9bcbcae0a46c5b0c329e08dd8f948","kubernetes.io/config.seen":"2024-08-07T17:53:23.867528890Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0807 17:56:11.203099    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:11.203099    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:11.203099    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:11.203099    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:11.211683    9640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 17:56:11.211865    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:11.211865    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:11 GMT
	I0807 17:56:11.211865    9640 round_trippers.go:580]     Audit-Id: 09e3d658-f469-46ff-a18f-89848e19cd2c
	I0807 17:56:11.211865    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:11.211865    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:11.211865    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:11.211865    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:11.212244    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:11.693860    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/etcd-functional-100700
	I0807 17:56:11.693860    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:11.693860    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:11.693964    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:11.700061    9640 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 17:56:11.700061    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:11.700061    9640 round_trippers.go:580]     Audit-Id: e0dc969c-78db-4e14-89dd-112c852536c6
	I0807 17:56:11.700061    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:11.700061    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:11.700061    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:11.700061    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:11.700061    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:11 GMT
	I0807 17:56:11.700061    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-100700","namespace":"kube-system","uid":"a5c7f27d-b19f-437f-a53a-2bf11d9ac9ca","resourceVersion":"565","creationTimestamp":"2024-08-07T17:53:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.235.211:2379","kubernetes.io/config.hash":"beb9bcbcae0a46c5b0c329e08dd8f948","kubernetes.io/config.mirror":"beb9bcbcae0a46c5b0c329e08dd8f948","kubernetes.io/config.seen":"2024-08-07T17:53:23.867528890Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0807 17:56:11.700869    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:11.700869    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:11.700869    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:11.700869    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:11.704300    9640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 17:56:11.704300    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:11.704300    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:11.704300    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:11.704300    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:11 GMT
	I0807 17:56:11.705319    9640 round_trippers.go:580]     Audit-Id: 8adf20cf-6422-494d-8b18-289248900ae3
	I0807 17:56:11.705319    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:11.705319    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:11.705829    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:12.191810    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/etcd-functional-100700
	I0807 17:56:12.191891    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:12.191891    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:12.191891    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:12.196350    9640 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 17:56:12.196859    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:12.196859    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:12.196859    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:12.196859    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:12.196859    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:12.196859    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:12 GMT
	I0807 17:56:12.196859    9640 round_trippers.go:580]     Audit-Id: 391437d9-51b8-4413-9852-b4d593a450dd
	I0807 17:56:12.197116    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-100700","namespace":"kube-system","uid":"a5c7f27d-b19f-437f-a53a-2bf11d9ac9ca","resourceVersion":"565","creationTimestamp":"2024-08-07T17:53:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.235.211:2379","kubernetes.io/config.hash":"beb9bcbcae0a46c5b0c329e08dd8f948","kubernetes.io/config.mirror":"beb9bcbcae0a46c5b0c329e08dd8f948","kubernetes.io/config.seen":"2024-08-07T17:53:23.867528890Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0807 17:56:12.198798    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:12.198798    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:12.198798    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:12.198798    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:12.201145    9640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 17:56:12.201145    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:12.201145    9640 round_trippers.go:580]     Audit-Id: 270168e2-8f47-4f40-9b9d-81da793dcf04
	I0807 17:56:12.201145    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:12.201718    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:12.201768    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:12.201805    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:12.201805    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:12 GMT
	I0807 17:56:12.202063    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:12.694279    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/etcd-functional-100700
	I0807 17:56:12.694279    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:12.694503    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:12.694503    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:12.702506    9640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 17:56:12.702506    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:12.702506    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:12.702506    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:12.702506    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:12.702506    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:12 GMT
	I0807 17:56:12.702506    9640 round_trippers.go:580]     Audit-Id: d6c8678e-e97f-40c8-8856-b1206dc4e38e
	I0807 17:56:12.702506    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:12.703075    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-100700","namespace":"kube-system","uid":"a5c7f27d-b19f-437f-a53a-2bf11d9ac9ca","resourceVersion":"565","creationTimestamp":"2024-08-07T17:53:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.235.211:2379","kubernetes.io/config.hash":"beb9bcbcae0a46c5b0c329e08dd8f948","kubernetes.io/config.mirror":"beb9bcbcae0a46c5b0c329e08dd8f948","kubernetes.io/config.seen":"2024-08-07T17:53:23.867528890Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0807 17:56:12.703204    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:12.703204    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:12.703204    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:12.703204    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:12.707176    9640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 17:56:12.707176    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:12.707176    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:12.707176    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:12 GMT
	I0807 17:56:12.707176    9640 round_trippers.go:580]     Audit-Id: 85551d78-a868-498f-8fe5-d376512669c6
	I0807 17:56:12.707176    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:12.707176    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:12.707176    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:12.707757    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:12.707883    9640 pod_ready.go:102] pod "etcd-functional-100700" in "kube-system" namespace has status "Ready":"False"
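The `pod_ready.go` line above is the summary of what this loop is doing: each iteration fetches the pod, reads its `status.conditions`, and keeps waiting while the `Ready` condition is not `"True"`. A minimal self-contained sketch of that condition check (using only the standard library; the `podStatus` struct and `isPodReady` helper are illustrative names, not minikube's actual code) looks like:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// podStatus is the minimal subset of the Pod object needed for the check.
type podStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// isPodReady reports whether the pod JSON carries a Ready=True condition,
// mirroring the test that produces the pod_ready.go log line above.
func isPodReady(raw []byte) (bool, error) {
	var p podStatus
	if err := json.Unmarshal(raw, &p); err != nil {
		return false, err
	}
	for _, c := range p.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	// No Ready condition yet (e.g. pod still being scheduled).
	return false, nil
}

func main() {
	notReady := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	ready := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
	r1, _ := isPodReady(notReady)
	r2, _ := isPodReady(ready)
	fmt.Println(r1, r2)
}
```

In the failing run, this check returns false on every iteration until the poll timeout, which is why the same GET pair repeats for the duration of the test.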
	I0807 17:56:13.192645    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/etcd-functional-100700
	I0807 17:56:13.192645    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:13.192645    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:13.192645    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:13.196244    9640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 17:56:13.196244    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:13.196831    9640 round_trippers.go:580]     Audit-Id: 310ad94b-dce2-4dbf-b8c6-d337550e6be0
	I0807 17:56:13.196831    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:13.196831    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:13.196831    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:13.196831    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:13.196831    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:13 GMT
	I0807 17:56:13.197447    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-100700","namespace":"kube-system","uid":"a5c7f27d-b19f-437f-a53a-2bf11d9ac9ca","resourceVersion":"565","creationTimestamp":"2024-08-07T17:53:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.235.211:2379","kubernetes.io/config.hash":"beb9bcbcae0a46c5b0c329e08dd8f948","kubernetes.io/config.mirror":"beb9bcbcae0a46c5b0c329e08dd8f948","kubernetes.io/config.seen":"2024-08-07T17:53:23.867528890Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0807 17:56:13.198096    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:13.198096    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:13.198096    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:13.198096    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:13.201659    9640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 17:56:13.201721    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:13.201721    9640 round_trippers.go:580]     Audit-Id: 743d1549-2f7a-43dd-971a-c6368791b460
	I0807 17:56:13.201721    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:13.201721    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:13.201721    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:13.201721    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:13.201721    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:13 GMT
	I0807 17:56:13.202405    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:13.696550    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/etcd-functional-100700
	I0807 17:56:13.696550    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:13.696550    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:13.696550    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:13.701089    9640 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 17:56:13.701156    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:13.701156    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:13 GMT
	I0807 17:56:13.701156    9640 round_trippers.go:580]     Audit-Id: f22189fe-0606-4067-b615-ae8a676e3b56
	I0807 17:56:13.701156    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:13.701156    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:13.701156    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:13.701156    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:13.701156    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-100700","namespace":"kube-system","uid":"a5c7f27d-b19f-437f-a53a-2bf11d9ac9ca","resourceVersion":"565","creationTimestamp":"2024-08-07T17:53:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.235.211:2379","kubernetes.io/config.hash":"beb9bcbcae0a46c5b0c329e08dd8f948","kubernetes.io/config.mirror":"beb9bcbcae0a46c5b0c329e08dd8f948","kubernetes.io/config.seen":"2024-08-07T17:53:23.867528890Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0807 17:56:13.702467    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:13.702467    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:13.702529    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:13.702529    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:13.704811    9640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 17:56:13.704811    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:13.704811    9640 round_trippers.go:580]     Audit-Id: f7710d7a-25b9-4ed7-afde-4b71e2220a94
	I0807 17:56:13.704811    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:13.704811    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:13.704811    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:13.704811    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:13.704811    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:13 GMT
	I0807 17:56:13.706027    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:14.198693    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/etcd-functional-100700
	I0807 17:56:14.198693    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:14.198772    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:14.198772    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:14.202871    9640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 17:56:14.203013    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:14.203098    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:14.203098    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:14.203098    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:14.203098    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:14 GMT
	I0807 17:56:14.203098    9640 round_trippers.go:580]     Audit-Id: b8389e90-7072-4913-905a-953132ab3bbc
	I0807 17:56:14.203098    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:14.203098    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-100700","namespace":"kube-system","uid":"a5c7f27d-b19f-437f-a53a-2bf11d9ac9ca","resourceVersion":"609","creationTimestamp":"2024-08-07T17:53:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.235.211:2379","kubernetes.io/config.hash":"beb9bcbcae0a46c5b0c329e08dd8f948","kubernetes.io/config.mirror":"beb9bcbcae0a46c5b0c329e08dd8f948","kubernetes.io/config.seen":"2024-08-07T17:53:23.867528890Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6384 chars]
	I0807 17:56:14.203699    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:14.203699    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:14.203699    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:14.203699    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:14.206499    9640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 17:56:14.206499    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:14.207326    9640 round_trippers.go:580]     Audit-Id: bff174e6-9423-4646-a368-b5ddd4ed5656
	I0807 17:56:14.207326    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:14.207326    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:14.207326    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:14.207326    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:14.207326    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:14 GMT
	I0807 17:56:14.207628    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:14.207805    9640 pod_ready.go:92] pod "etcd-functional-100700" in "kube-system" namespace has status "Ready":"True"
	I0807 17:56:14.207805    9640 pod_ready.go:81] duration metric: took 3.5197748s for pod "etcd-functional-100700" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:14.207805    9640 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-100700" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:14.208331    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-100700
	I0807 17:56:14.208331    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:14.208331    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:14.208493    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:14.211206    9640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 17:56:14.212212    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:14.212212    9640 round_trippers.go:580]     Audit-Id: b4ac461f-681b-48dd-a66c-7eaac01ae84f
	I0807 17:56:14.212212    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:14.212212    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:14.212212    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:14.212212    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:14.212212    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:14 GMT
	I0807 17:56:14.212686    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-100700","namespace":"kube-system","uid":"5249a821-fdb0-4a53-9e1e-ff9336ba130f","resourceVersion":"568","creationTimestamp":"2024-08-07T17:53:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.235.211:8441","kubernetes.io/config.hash":"00b3db9060a30b06edb713820a5caeb5","kubernetes.io/config.mirror":"00b3db9060a30b06edb713820a5caeb5","kubernetes.io/config.seen":"2024-08-07T17:53:15.631450444Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8158 chars]
	I0807 17:56:14.213485    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:14.213667    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:14.213736    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:14.213778    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:14.216794    9640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 17:56:14.216794    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:14.216794    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:14.216794    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:14.217302    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:14.217302    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:14 GMT
	I0807 17:56:14.217302    9640 round_trippers.go:580]     Audit-Id: 6645223d-14a0-4af2-961d-70f8c488e999
	I0807 17:56:14.217302    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:14.217555    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:14.712727    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-100700
	I0807 17:56:14.712727    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:14.712727    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:14.712727    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:14.718540    9640 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 17:56:14.718540    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:14.718540    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:14 GMT
	I0807 17:56:14.718540    9640 round_trippers.go:580]     Audit-Id: 02dea21d-4bc1-4dc4-b88b-7592b339bf54
	I0807 17:56:14.718540    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:14.718540    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:14.718540    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:14.718540    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:14.718540    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-100700","namespace":"kube-system","uid":"5249a821-fdb0-4a53-9e1e-ff9336ba130f","resourceVersion":"568","creationTimestamp":"2024-08-07T17:53:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.235.211:8441","kubernetes.io/config.hash":"00b3db9060a30b06edb713820a5caeb5","kubernetes.io/config.mirror":"00b3db9060a30b06edb713820a5caeb5","kubernetes.io/config.seen":"2024-08-07T17:53:15.631450444Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8158 chars]
	I0807 17:56:14.719824    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:14.719883    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:14.719883    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:14.719883    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:14.722147    9640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 17:56:14.722779    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:14.722779    9640 round_trippers.go:580]     Audit-Id: ddd562b5-1d6e-4b29-8892-888bf37e00c8
	I0807 17:56:14.722779    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:14.722779    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:14.722779    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:14.722779    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:14.722779    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:14 GMT
	I0807 17:56:14.723080    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:15.222429    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-100700
	I0807 17:56:15.222429    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:15.222429    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:15.222429    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:15.226062    9640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 17:56:15.226062    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:15.226062    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:15 GMT
	I0807 17:56:15.226062    9640 round_trippers.go:580]     Audit-Id: 51341666-4fb4-4675-8ccd-98a8c7753264
	I0807 17:56:15.226062    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:15.226062    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:15.226062    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:15.226062    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:15.226615    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-100700","namespace":"kube-system","uid":"5249a821-fdb0-4a53-9e1e-ff9336ba130f","resourceVersion":"568","creationTimestamp":"2024-08-07T17:53:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.235.211:8441","kubernetes.io/config.hash":"00b3db9060a30b06edb713820a5caeb5","kubernetes.io/config.mirror":"00b3db9060a30b06edb713820a5caeb5","kubernetes.io/config.seen":"2024-08-07T17:53:15.631450444Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8158 chars]
	I0807 17:56:15.227847    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:15.227911    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:15.227911    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:15.227911    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:15.230185    9640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 17:56:15.230185    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:15.230185    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:15 GMT
	I0807 17:56:15.230185    9640 round_trippers.go:580]     Audit-Id: 8d515f70-ee72-46b2-99f0-cfbda16820e2
	I0807 17:56:15.230185    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:15.230185    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:15.230185    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:15.230185    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:15.231421    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:15.708761    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-100700
	I0807 17:56:15.708761    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:15.708873    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:15.708873    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:15.714084    9640 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 17:56:15.714242    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:15.714242    9640 round_trippers.go:580]     Audit-Id: b6b9edd7-7884-4b05-9f73-cf61cd052a90
	I0807 17:56:15.714242    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:15.714242    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:15.714242    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:15.714328    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:15.714328    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:15 GMT
	I0807 17:56:15.715114    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-100700","namespace":"kube-system","uid":"5249a821-fdb0-4a53-9e1e-ff9336ba130f","resourceVersion":"568","creationTimestamp":"2024-08-07T17:53:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.235.211:8441","kubernetes.io/config.hash":"00b3db9060a30b06edb713820a5caeb5","kubernetes.io/config.mirror":"00b3db9060a30b06edb713820a5caeb5","kubernetes.io/config.seen":"2024-08-07T17:53:15.631450444Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8158 chars]
	I0807 17:56:15.716188    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:15.716340    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:15.716340    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:15.716340    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:15.719227    9640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 17:56:15.719227    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:15.720231    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:15.720256    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:15.720256    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:15.720256    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:15 GMT
	I0807 17:56:15.720256    9640 round_trippers.go:580]     Audit-Id: bd8ea6e4-2f24-4b0c-8ff2-9549065d98d8
	I0807 17:56:15.720256    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:15.720524    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:16.223448    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-100700
	I0807 17:56:16.223448    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:16.223519    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:16.223519    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:16.226850    9640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 17:56:16.227650    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:16.227650    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:16.227650    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:16.227650    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:16 GMT
	I0807 17:56:16.227650    9640 round_trippers.go:580]     Audit-Id: 12182003-d0bd-4d68-ad13-52600287013b
	I0807 17:56:16.227650    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:16.227650    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:16.227650    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-100700","namespace":"kube-system","uid":"5249a821-fdb0-4a53-9e1e-ff9336ba130f","resourceVersion":"611","creationTimestamp":"2024-08-07T17:53:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.235.211:8441","kubernetes.io/config.hash":"00b3db9060a30b06edb713820a5caeb5","kubernetes.io/config.mirror":"00b3db9060a30b06edb713820a5caeb5","kubernetes.io/config.seen":"2024-08-07T17:53:15.631450444Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 7914 chars]
	I0807 17:56:16.228490    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:16.228490    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:16.228490    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:16.228490    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:16.230964    9640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 17:56:16.230964    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:16.230964    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:16.230964    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:16.230964    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:16.231984    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:16 GMT
	I0807 17:56:16.231984    9640 round_trippers.go:580]     Audit-Id: 32da495b-19f8-44c0-92c9-58073f11cc99
	I0807 17:56:16.231984    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:16.232199    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:16.232238    9640 pod_ready.go:92] pod "kube-apiserver-functional-100700" in "kube-system" namespace has status "Ready":"True"
	I0807 17:56:16.232238    9640 pod_ready.go:81] duration metric: took 2.0244075s for pod "kube-apiserver-functional-100700" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:16.232238    9640 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-100700" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:16.232238    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-100700
	I0807 17:56:16.232774    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:16.232832    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:16.232832    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:16.238089    9640 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 17:56:16.238502    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:16.238502    9640 round_trippers.go:580]     Audit-Id: fb07dd2e-784d-454f-b25e-9584655ab7be
	I0807 17:56:16.238502    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:16.238502    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:16.238502    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:16.238602    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:16.238602    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:16 GMT
	I0807 17:56:16.238849    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-100700","namespace":"kube-system","uid":"1f265aaa-5e7b-41b5-9fa6-ce6a907b24a9","resourceVersion":"571","creationTimestamp":"2024-08-07T17:53:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e0701ecea101733c14207f9cb54d1dbe","kubernetes.io/config.mirror":"e0701ecea101733c14207f9cb54d1dbe","kubernetes.io/config.seen":"2024-08-07T17:53:23.867537490Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7739 chars]
	I0807 17:56:16.239605    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:16.239684    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:16.239684    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:16.239684    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:16.245264    9640 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 17:56:16.245611    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:16.245611    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:16.245611    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:16.245611    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:16 GMT
	I0807 17:56:16.245611    9640 round_trippers.go:580]     Audit-Id: eb20a4a9-4773-4cf6-b5e7-e548654ab953
	I0807 17:56:16.245611    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:16.245611    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:16.245952    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:16.737485    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-100700
	I0807 17:56:16.737485    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:16.737485    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:16.737485    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:16.741318    9640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 17:56:16.741621    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:16.741621    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:16.741621    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:16.741621    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:16.741796    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:16.741821    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:16 GMT
	I0807 17:56:16.741821    9640 round_trippers.go:580]     Audit-Id: edf26b3c-2bb1-4219-83d6-91b45dee3f85
	I0807 17:56:16.742371    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-100700","namespace":"kube-system","uid":"1f265aaa-5e7b-41b5-9fa6-ce6a907b24a9","resourceVersion":"571","creationTimestamp":"2024-08-07T17:53:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e0701ecea101733c14207f9cb54d1dbe","kubernetes.io/config.mirror":"e0701ecea101733c14207f9cb54d1dbe","kubernetes.io/config.seen":"2024-08-07T17:53:23.867537490Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7739 chars]
	I0807 17:56:16.743200    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:16.743200    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:16.743200    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:16.743200    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:16.745436    9640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 17:56:16.745436    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:16.745820    9640 round_trippers.go:580]     Audit-Id: bed7b77f-bf8d-43a8-a453-ed88806d6d5d
	I0807 17:56:16.745820    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:16.745820    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:16.745820    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:16.745820    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:16.745820    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:16 GMT
	I0807 17:56:16.746163    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:17.238420    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-100700
	I0807 17:56:17.238620    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:17.238620    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:17.238620    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:17.241925    9640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 17:56:17.241925    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:17.242665    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:17 GMT
	I0807 17:56:17.242665    9640 round_trippers.go:580]     Audit-Id: 1965eeca-619d-4945-8f88-c350236d95c6
	I0807 17:56:17.242665    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:17.242665    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:17.242719    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:17.242719    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:17.242719    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-100700","namespace":"kube-system","uid":"1f265aaa-5e7b-41b5-9fa6-ce6a907b24a9","resourceVersion":"571","creationTimestamp":"2024-08-07T17:53:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e0701ecea101733c14207f9cb54d1dbe","kubernetes.io/config.mirror":"e0701ecea101733c14207f9cb54d1dbe","kubernetes.io/config.seen":"2024-08-07T17:53:23.867537490Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7739 chars]
	I0807 17:56:17.243515    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:17.243515    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:17.243515    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:17.243515    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:17.245863    9640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 17:56:17.246851    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:17.246851    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:17.246851    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:17.246851    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:17.246892    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:17 GMT
	I0807 17:56:17.246892    9640 round_trippers.go:580]     Audit-Id: 1144a6c7-56bd-47e5-a378-efd9576559f1
	I0807 17:56:17.246892    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:17.247093    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:17.734820    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-100700
	I0807 17:56:17.734870    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:17.734870    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:17.734870    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:17.739495    9640 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 17:56:17.739495    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:17.740498    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:17.740582    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:17.740582    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:17 GMT
	I0807 17:56:17.740582    9640 round_trippers.go:580]     Audit-Id: e3e41b1f-c81e-484c-8519-d5b9413166bd
	I0807 17:56:17.740582    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:17.740582    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:17.741484    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-100700","namespace":"kube-system","uid":"1f265aaa-5e7b-41b5-9fa6-ce6a907b24a9","resourceVersion":"571","creationTimestamp":"2024-08-07T17:53:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e0701ecea101733c14207f9cb54d1dbe","kubernetes.io/config.mirror":"e0701ecea101733c14207f9cb54d1dbe","kubernetes.io/config.seen":"2024-08-07T17:53:23.867537490Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7739 chars]
	I0807 17:56:17.742382    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:17.742437    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:17.742437    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:17.742437    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:17.745342    9640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 17:56:17.745342    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:17.745342    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:17.745342    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:17 GMT
	I0807 17:56:17.745342    9640 round_trippers.go:580]     Audit-Id: 5af86b51-2424-492c-9c2e-7e1a066be958
	I0807 17:56:17.745342    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:17.745342    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:17.745342    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:17.746449    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:18.233117    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-100700
	I0807 17:56:18.233194    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:18.233194    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:18.233194    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:18.239616    9640 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 17:56:18.239616    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:18.239616    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:18 GMT
	I0807 17:56:18.239616    9640 round_trippers.go:580]     Audit-Id: fc2b1038-5546-4b27-9e8b-51be9c17d098
	I0807 17:56:18.239616    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:18.239616    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:18.239616    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:18.239616    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:18.239616    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-100700","namespace":"kube-system","uid":"1f265aaa-5e7b-41b5-9fa6-ce6a907b24a9","resourceVersion":"571","creationTimestamp":"2024-08-07T17:53:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e0701ecea101733c14207f9cb54d1dbe","kubernetes.io/config.mirror":"e0701ecea101733c14207f9cb54d1dbe","kubernetes.io/config.seen":"2024-08-07T17:53:23.867537490Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7739 chars]
	I0807 17:56:18.240336    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:18.240865    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:18.240865    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:18.240865    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:18.243741    9640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 17:56:18.243741    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:18.243741    9640 round_trippers.go:580]     Audit-Id: 68e6374a-2103-497c-a32b-dc4844fe61d4
	I0807 17:56:18.243741    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:18.243741    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:18.243741    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:18.243741    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:18.243741    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:18 GMT
	I0807 17:56:18.243741    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:18.244803    9640 pod_ready.go:102] pod "kube-controller-manager-functional-100700" in "kube-system" namespace has status "Ready":"False"
	I0807 17:56:18.733401    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-100700
	I0807 17:56:18.733401    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:18.733401    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:18.733401    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:18.735977    9640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 17:56:18.736983    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:18.737005    9640 round_trippers.go:580]     Audit-Id: f34169e9-dcfb-45de-8c63-f6df07fa7d53
	I0807 17:56:18.737005    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:18.737005    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:18.737005    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:18.737005    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:18.737005    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:18 GMT
	I0807 17:56:18.737472    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-100700","namespace":"kube-system","uid":"1f265aaa-5e7b-41b5-9fa6-ce6a907b24a9","resourceVersion":"571","creationTimestamp":"2024-08-07T17:53:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e0701ecea101733c14207f9cb54d1dbe","kubernetes.io/config.mirror":"e0701ecea101733c14207f9cb54d1dbe","kubernetes.io/config.seen":"2024-08-07T17:53:23.867537490Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7739 chars]
	I0807 17:56:18.738367    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:18.738571    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:18.738645    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:18.738645    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:18.741223    9640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0807 17:56:18.741223    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:18.741223    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:18.741223    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:18.741223    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:18 GMT
	I0807 17:56:18.741223    9640 round_trippers.go:580]     Audit-Id: fd9ab6b1-c45e-4c1f-9668-a43ab3493baf
	I0807 17:56:18.741223    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:18.741301    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:18.741630    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:19.246583    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-100700
	I0807 17:56:19.246583    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:19.246583    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:19.246698    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:19.252352    9640 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 17:56:19.252445    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:19.252505    9640 round_trippers.go:580]     Audit-Id: 95758b37-1796-407f-b664-c2f6cacfde2f
	I0807 17:56:19.252505    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:19.252505    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:19.252505    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:19.252505    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:19.252505    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:19 GMT
	I0807 17:56:19.253779    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-100700","namespace":"kube-system","uid":"1f265aaa-5e7b-41b5-9fa6-ce6a907b24a9","resourceVersion":"571","creationTimestamp":"2024-08-07T17:53:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e0701ecea101733c14207f9cb54d1dbe","kubernetes.io/config.mirror":"e0701ecea101733c14207f9cb54d1dbe","kubernetes.io/config.seen":"2024-08-07T17:53:23.867537490Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7739 chars]
	I0807 17:56:19.254517    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:19.254517    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:19.254517    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:19.254517    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:19.257439    9640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 17:56:19.257780    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:19.257780    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:19 GMT
	I0807 17:56:19.257780    9640 round_trippers.go:580]     Audit-Id: 2b5db9e1-6ad7-4dd7-bcac-17d29cfe2acf
	I0807 17:56:19.257780    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:19.257859    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:19.257859    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:19.257859    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:19.257890    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:19.748175    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-100700
	I0807 17:56:19.748175    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:19.748175    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:19.748175    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:19.752347    9640 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 17:56:19.752347    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:19.752442    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:19.752442    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:19.752442    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:19 GMT
	I0807 17:56:19.752442    9640 round_trippers.go:580]     Audit-Id: 6387f47f-bf72-4f5d-ba6b-76bc27c9932e
	I0807 17:56:19.752442    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:19.752442    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:19.752903    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-100700","namespace":"kube-system","uid":"1f265aaa-5e7b-41b5-9fa6-ce6a907b24a9","resourceVersion":"617","creationTimestamp":"2024-08-07T17:53:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e0701ecea101733c14207f9cb54d1dbe","kubernetes.io/config.mirror":"e0701ecea101733c14207f9cb54d1dbe","kubernetes.io/config.seen":"2024-08-07T17:53:23.867537490Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7477 chars]
	I0807 17:56:19.753776    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:19.753832    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:19.753832    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:19.753832    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:19.758674    9640 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 17:56:19.758674    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:19.758674    9640 round_trippers.go:580]     Audit-Id: 84f4c24e-a425-4425-8f30-9fd6a213084c
	I0807 17:56:19.758674    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:19.758674    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:19.758674    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:19.758674    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:19.758674    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:19 GMT
	I0807 17:56:19.758674    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:19.759675    9640 pod_ready.go:92] pod "kube-controller-manager-functional-100700" in "kube-system" namespace has status "Ready":"True"
	I0807 17:56:19.759675    9640 pod_ready.go:81] duration metric: took 3.5273919s for pod "kube-controller-manager-functional-100700" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:19.759675    9640 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fhgrj" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:19.759675    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/kube-proxy-fhgrj
	I0807 17:56:19.759675    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:19.759675    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:19.759675    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:19.763706    9640 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 17:56:19.763706    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:19.763706    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:19.763706    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:19.763706    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:19.763706    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:19.763706    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:19 GMT
	I0807 17:56:19.763706    9640 round_trippers.go:580]     Audit-Id: 7e29a83f-429a-43d7-9513-445419d27999
	I0807 17:56:19.763706    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fhgrj","generateName":"kube-proxy-","namespace":"kube-system","uid":"7777c5e7-cff4-448e-9880-a3b6c6264025","resourceVersion":"602","creationTimestamp":"2024-08-07T17:53:39Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"42123b7c-ddc5-4334-ae76-0b8514f42bb5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"42123b7c-ddc5-4334-ae76-0b8514f42bb5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6040 chars]
	I0807 17:56:19.763706    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:19.763706    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:19.763706    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:19.763706    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:19.767679    9640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 17:56:19.767679    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:19.767679    9640 round_trippers.go:580]     Audit-Id: 8c5eb360-c4c9-4cd3-aa4c-13ab9f7ece7d
	I0807 17:56:19.767679    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:19.767679    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:19.767679    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:19.767679    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:19.767679    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:19 GMT
	I0807 17:56:19.767679    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:19.768670    9640 pod_ready.go:92] pod "kube-proxy-fhgrj" in "kube-system" namespace has status "Ready":"True"
	I0807 17:56:19.768670    9640 pod_ready.go:81] duration metric: took 8.9949ms for pod "kube-proxy-fhgrj" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:19.768670    9640 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-100700" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:19.768670    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-100700
	I0807 17:56:19.768670    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:19.768670    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:19.768670    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:19.773671    9640 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 17:56:19.773671    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:19.773671    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:19.773671    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:19.773671    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:19 GMT
	I0807 17:56:19.773671    9640 round_trippers.go:580]     Audit-Id: 35cc8828-56ee-4e76-b65e-a78f4e67f5d4
	I0807 17:56:19.773671    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:19.773671    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:19.774172    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-100700","namespace":"kube-system","uid":"1dbe4230-2246-468f-abe1-077025453579","resourceVersion":"615","creationTimestamp":"2024-08-07T17:53:24Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ea81f7b606b5a38eedc0e1fd20aaeb7b","kubernetes.io/config.mirror":"ea81f7b606b5a38eedc0e1fd20aaeb7b","kubernetes.io/config.seen":"2024-08-07T17:53:23.867538890Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5207 chars]
	I0807 17:56:19.774478    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:19.774478    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:19.774478    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:19.774478    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:19.798781    9640 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0807 17:56:19.799397    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:19.799397    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:19.799397    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:19.799397    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:19 GMT
	I0807 17:56:19.799397    9640 round_trippers.go:580]     Audit-Id: d62cd99e-0fdd-4df0-9f79-97b1deee5858
	I0807 17:56:19.799397    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:19.799397    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:19.800191    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:19.800721    9640 pod_ready.go:92] pod "kube-scheduler-functional-100700" in "kube-system" namespace has status "Ready":"True"
	I0807 17:56:19.800721    9640 pod_ready.go:81] duration metric: took 32.0505ms for pod "kube-scheduler-functional-100700" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:19.800876    9640 pod_ready.go:38] duration metric: took 10.139057s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 17:56:19.800876    9640 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0807 17:56:19.825645    9640 command_runner.go:130] > -16
	I0807 17:56:19.825645    9640 ops.go:34] apiserver oom_adj: -16
	I0807 17:56:19.825645    9640 kubeadm.go:597] duration metric: took 20.8637602s to restartPrimaryControlPlane
	I0807 17:56:19.825645    9640 kubeadm.go:394] duration metric: took 20.938536s to StartCluster
	I0807 17:56:19.825645    9640 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:56:19.826173    9640 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 17:56:19.828046    9640 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:56:19.829602    9640 start.go:235] Will wait 6m0s for node &{Name: IP:172.28.235.211 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 17:56:19.829602    9640 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0807 17:56:19.829602    9640 addons.go:69] Setting storage-provisioner=true in profile "functional-100700"
	I0807 17:56:19.829602    9640 addons.go:69] Setting default-storageclass=true in profile "functional-100700"
	I0807 17:56:19.829602    9640 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-100700"
	I0807 17:56:19.829602    9640 addons.go:234] Setting addon storage-provisioner=true in "functional-100700"
	W0807 17:56:19.830132    9640 addons.go:243] addon storage-provisioner should already be in state true
	I0807 17:56:19.829602    9640 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 17:56:19.830311    9640 host.go:66] Checking if "functional-100700" exists ...
	I0807 17:56:19.831396    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:56:19.831396    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:56:19.836848    9640 out.go:177] * Verifying Kubernetes components...
	I0807 17:56:19.855789    9640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:56:20.211215    9640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 17:56:20.248638    9640 node_ready.go:35] waiting up to 6m0s for node "functional-100700" to be "Ready" ...
	I0807 17:56:20.249039    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:20.249175    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:20.249175    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:20.249175    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:20.253520    9640 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 17:56:20.253784    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:20.253881    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:20.253881    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:20.253923    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:20.253923    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:20 GMT
	I0807 17:56:20.253923    9640 round_trippers.go:580]     Audit-Id: b0d5d4c1-047b-44b1-beb0-8bb2b68c908d
	I0807 17:56:20.253923    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:20.254515    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:20.258522    9640 node_ready.go:49] node "functional-100700" has status "Ready":"True"
	I0807 17:56:20.258522    9640 node_ready.go:38] duration metric: took 9.7774ms for node "functional-100700" to be "Ready" ...
	I0807 17:56:20.258522    9640 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 17:56:20.258522    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods
	I0807 17:56:20.258522    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:20.258522    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:20.258522    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:20.267545    9640 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0807 17:56:20.267545    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:20.267545    9640 round_trippers.go:580]     Audit-Id: 474f38a6-8672-4889-b961-3187f45bdfa9
	I0807 17:56:20.267545    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:20.267545    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:20.267545    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:20.267545    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:20.267545    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:20 GMT
	I0807 17:56:20.269566    9640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"622"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-wwrwt","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5ddd5aa0-8aab-423c-855d-b8ea1633db28","resourceVersion":"607","creationTimestamp":"2024-08-07T17:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"47e7e42e-7133-43ea-9d53-4694600f4ac1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47e7e42e-7133-43ea-9d53-4694600f4ac1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50141 chars]
	I0807 17:56:20.272514    9640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wwrwt" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:20.273513    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wwrwt
	I0807 17:56:20.273513    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:20.273513    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:20.273513    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:20.279539    9640 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 17:56:20.279907    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:20.279907    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:20.279907    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:20.279982    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:20.279982    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:20.280070    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:20 GMT
	I0807 17:56:20.280095    9640 round_trippers.go:580]     Audit-Id: 90dea9c7-41cb-4beb-86c8-e08969c54d0f
	I0807 17:56:20.280385    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-wwrwt","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5ddd5aa0-8aab-423c-855d-b8ea1633db28","resourceVersion":"607","creationTimestamp":"2024-08-07T17:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"47e7e42e-7133-43ea-9d53-4694600f4ac1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47e7e42e-7133-43ea-9d53-4694600f4ac1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6452 chars]
	I0807 17:56:20.280817    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:20.280817    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:20.280817    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:20.280817    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:20.285554    9640 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 17:56:20.285641    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:20.285683    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:20 GMT
	I0807 17:56:20.285683    9640 round_trippers.go:580]     Audit-Id: 72003ea9-e0b3-4b9a-bba2-0323d45a8f0b
	I0807 17:56:20.285683    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:20.285683    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:20.285683    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:20.285683    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:20.287106    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:20.287299    9640 pod_ready.go:92] pod "coredns-7db6d8ff4d-wwrwt" in "kube-system" namespace has status "Ready":"True"
	I0807 17:56:20.287299    9640 pod_ready.go:81] duration metric: took 14.7853ms for pod "coredns-7db6d8ff4d-wwrwt" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:20.287299    9640 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-100700" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:20.287931    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/etcd-functional-100700
	I0807 17:56:20.287931    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:20.287931    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:20.287931    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:20.294752    9640 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 17:56:20.294752    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:20.294752    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:20.294752    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:20.294752    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:20.294752    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:20.294752    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:20 GMT
	I0807 17:56:20.294752    9640 round_trippers.go:580]     Audit-Id: 8a835b6b-5168-426e-b58a-c141c17a53bd
	I0807 17:56:20.295755    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-100700","namespace":"kube-system","uid":"a5c7f27d-b19f-437f-a53a-2bf11d9ac9ca","resourceVersion":"609","creationTimestamp":"2024-08-07T17:53:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.235.211:2379","kubernetes.io/config.hash":"beb9bcbcae0a46c5b0c329e08dd8f948","kubernetes.io/config.mirror":"beb9bcbcae0a46c5b0c329e08dd8f948","kubernetes.io/config.seen":"2024-08-07T17:53:23.867528890Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6384 chars]
	I0807 17:56:20.295755    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:20.295755    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:20.295755    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:20.295755    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:20.298749    9640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 17:56:20.298749    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:20.298749    9640 round_trippers.go:580]     Audit-Id: 97fcbd1c-aade-4cef-ab8a-85402dd70a41
	I0807 17:56:20.298749    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:20.298749    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:20.298749    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:20.298749    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:20.298749    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:20 GMT
	I0807 17:56:20.298749    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:20.299754    9640 pod_ready.go:92] pod "etcd-functional-100700" in "kube-system" namespace has status "Ready":"True"
	I0807 17:56:20.299754    9640 pod_ready.go:81] duration metric: took 12.4544ms for pod "etcd-functional-100700" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:20.299754    9640 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-100700" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:20.348887    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-100700
	I0807 17:56:20.348887    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:20.349159    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:20.349159    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:20.357860    9640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 17:56:20.357860    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:20.357978    9640 round_trippers.go:580]     Audit-Id: 58aa1bc9-611a-4068-9187-f04ac59b1f8b
	I0807 17:56:20.357978    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:20.357978    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:20.357978    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:20.358027    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:20.358027    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:20 GMT
	I0807 17:56:20.358092    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-100700","namespace":"kube-system","uid":"5249a821-fdb0-4a53-9e1e-ff9336ba130f","resourceVersion":"611","creationTimestamp":"2024-08-07T17:53:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.235.211:8441","kubernetes.io/config.hash":"00b3db9060a30b06edb713820a5caeb5","kubernetes.io/config.mirror":"00b3db9060a30b06edb713820a5caeb5","kubernetes.io/config.seen":"2024-08-07T17:53:15.631450444Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 7914 chars]
	I0807 17:56:20.555126    9640 request.go:629] Waited for 195.801ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:20.555191    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:20.555191    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:20.555191    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:20.555191    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:20.560938    9640 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 17:56:20.560938    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:20.560938    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:20.560938    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:20.560938    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:20.560938    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:20.560938    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:20 GMT
	I0807 17:56:20.560938    9640 round_trippers.go:580]     Audit-Id: 3f4671f9-4fa9-487c-a993-ff68daeea1a4
	I0807 17:56:20.561608    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:20.562070    9640 pod_ready.go:92] pod "kube-apiserver-functional-100700" in "kube-system" namespace has status "Ready":"True"
	I0807 17:56:20.562166    9640 pod_ready.go:81] duration metric: took 262.3127ms for pod "kube-apiserver-functional-100700" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:20.562166    9640 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-100700" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:20.761540    9640 request.go:629] Waited for 199.3715ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-100700
	I0807 17:56:20.761850    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-100700
	I0807 17:56:20.761976    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:20.761976    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:20.761976    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:20.765395    9640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 17:56:20.765772    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:20.765772    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:20.765772    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:20.765772    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:20 GMT
	I0807 17:56:20.765772    9640 round_trippers.go:580]     Audit-Id: d995b548-7312-4b40-a2ea-d5d5d945a98a
	I0807 17:56:20.765772    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:20.765772    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:20.766211    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-100700","namespace":"kube-system","uid":"1f265aaa-5e7b-41b5-9fa6-ce6a907b24a9","resourceVersion":"617","creationTimestamp":"2024-08-07T17:53:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e0701ecea101733c14207f9cb54d1dbe","kubernetes.io/config.mirror":"e0701ecea101733c14207f9cb54d1dbe","kubernetes.io/config.seen":"2024-08-07T17:53:23.867537490Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7477 chars]
	I0807 17:56:20.952111    9640 request.go:629] Waited for 184.7274ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:20.952438    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:20.952438    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:20.952438    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:20.952438    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:20.955965    9640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 17:56:20.956862    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:20.956862    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:20.956862    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:20.956862    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:20.956862    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:20.956862    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:20 GMT
	I0807 17:56:20.956862    9640 round_trippers.go:580]     Audit-Id: 2d5835b0-871a-4b8f-a142-a2316b17bdf2
	I0807 17:56:20.957407    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:20.958006    9640 pod_ready.go:92] pod "kube-controller-manager-functional-100700" in "kube-system" namespace has status "Ready":"True"
	I0807 17:56:20.958006    9640 pod_ready.go:81] duration metric: took 395.8356ms for pod "kube-controller-manager-functional-100700" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:20.958006    9640 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fhgrj" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:21.157942    9640 request.go:629] Waited for 199.6288ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/kube-proxy-fhgrj
	I0807 17:56:21.158159    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/kube-proxy-fhgrj
	I0807 17:56:21.158159    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:21.158159    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:21.158159    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:21.161776    9640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 17:56:21.161776    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:21.161776    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:21.161776    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:21.161951    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:21.161951    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:21.161951    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:21 GMT
	I0807 17:56:21.161951    9640 round_trippers.go:580]     Audit-Id: 1a23aebc-2f16-49d7-a978-3bc550c48438
	I0807 17:56:21.162123    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fhgrj","generateName":"kube-proxy-","namespace":"kube-system","uid":"7777c5e7-cff4-448e-9880-a3b6c6264025","resourceVersion":"602","creationTimestamp":"2024-08-07T17:53:39Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"42123b7c-ddc5-4334-ae76-0b8514f42bb5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"42123b7c-ddc5-4334-ae76-0b8514f42bb5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6040 chars]
	I0807 17:56:21.362639    9640 request.go:629] Waited for 199.5985ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:21.362828    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:21.362828    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:21.362828    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:21.362828    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:21.366731    9640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 17:56:21.366731    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:21.366731    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:21.366731    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:21.366731    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:21.366731    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:21.366731    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:21 GMT
	I0807 17:56:21.366731    9640 round_trippers.go:580]     Audit-Id: d4a526c8-0159-412d-a94d-e3d0051ae455
	I0807 17:56:21.367561    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:21.368549    9640 pod_ready.go:92] pod "kube-proxy-fhgrj" in "kube-system" namespace has status "Ready":"True"
	I0807 17:56:21.368572    9640 pod_ready.go:81] duration metric: took 410.4363ms for pod "kube-proxy-fhgrj" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:21.368572    9640 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-100700" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:21.552046    9640 request.go:629] Waited for 183.4716ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-100700
	I0807 17:56:21.552133    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-100700
	I0807 17:56:21.552133    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:21.552133    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:21.552133    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:21.557763    9640 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 17:56:21.557804    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:21.557804    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:21 GMT
	I0807 17:56:21.557804    9640 round_trippers.go:580]     Audit-Id: 20ed9bf8-3cae-4025-b96f-6869cd178082
	I0807 17:56:21.557804    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:21.557804    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:21.557804    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:21.557804    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:21.558090    9640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-100700","namespace":"kube-system","uid":"1dbe4230-2246-468f-abe1-077025453579","resourceVersion":"615","creationTimestamp":"2024-08-07T17:53:24Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ea81f7b606b5a38eedc0e1fd20aaeb7b","kubernetes.io/config.mirror":"ea81f7b606b5a38eedc0e1fd20aaeb7b","kubernetes.io/config.seen":"2024-08-07T17:53:23.867538890Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5207 chars]
	I0807 17:56:21.758842    9640 request.go:629] Waited for 200.1219ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:21.759078    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes/functional-100700
	I0807 17:56:21.759160    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:21.759160    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:21.759215    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:21.762903    9640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 17:56:21.762989    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:21.762989    9640 round_trippers.go:580]     Audit-Id: f8f3ff3e-d7f3-4f50-9c71-f36fd3196ca9
	I0807 17:56:21.762989    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:21.762989    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:21.762989    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:21.762989    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:21.762989    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:21 GMT
	I0807 17:56:21.763413    9640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-08-07T17:53:20Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0807 17:56:21.763562    9640 pod_ready.go:92] pod "kube-scheduler-functional-100700" in "kube-system" namespace has status "Ready":"True"
	I0807 17:56:21.763562    9640 pod_ready.go:81] duration metric: took 394.9847ms for pod "kube-scheduler-functional-100700" in "kube-system" namespace to be "Ready" ...
	I0807 17:56:21.763562    9640 pod_ready.go:38] duration metric: took 1.5050208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 17:56:21.763562    9640 api_server.go:52] waiting for apiserver process to appear ...
	I0807 17:56:21.778409    9640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 17:56:21.812081    9640 command_runner.go:130] > 5343
	I0807 17:56:21.812081    9640 api_server.go:72] duration metric: took 1.9824533s to wait for apiserver process to appear ...
	I0807 17:56:21.813103    9640 api_server.go:88] waiting for apiserver healthz status ...
	I0807 17:56:21.813103    9640 api_server.go:253] Checking apiserver healthz at https://172.28.235.211:8441/healthz ...
	I0807 17:56:21.821982    9640 api_server.go:279] https://172.28.235.211:8441/healthz returned 200:
	ok
	I0807 17:56:21.822157    9640 round_trippers.go:463] GET https://172.28.235.211:8441/version
	I0807 17:56:21.822157    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:21.822157    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:21.822157    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:21.823768    9640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0807 17:56:21.824198    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:21.824198    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:21.824316    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:21.824316    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:21.824316    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:21.824376    9640 round_trippers.go:580]     Content-Length: 263
	I0807 17:56:21.824467    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:21 GMT
	I0807 17:56:21.824467    9640 round_trippers.go:580]     Audit-Id: 312c83e9-dddc-4e79-a3b9-b6f308fc0d93
	I0807 17:56:21.824524    9640 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0807 17:56:21.824595    9640 api_server.go:141] control plane version: v1.30.3
	I0807 17:56:21.824676    9640 api_server.go:131] duration metric: took 11.5722ms to wait for apiserver health ...
	I0807 17:56:21.824750    9640 system_pods.go:43] waiting for kube-system pods to appear ...
	I0807 17:56:21.962038    9640 request.go:629] Waited for 137.2068ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods
	I0807 17:56:21.962288    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods
	I0807 17:56:21.962518    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:21.962518    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:21.962518    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:21.972862    9640 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0807 17:56:21.972862    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:21.972862    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:21 GMT
	I0807 17:56:21.972862    9640 round_trippers.go:580]     Audit-Id: 9f5356eb-a709-498f-b1f8-a5dbe8833e66
	I0807 17:56:21.972862    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:21.972862    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:21.972862    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:21.972862    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:21.974664    9640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"622"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-wwrwt","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5ddd5aa0-8aab-423c-855d-b8ea1633db28","resourceVersion":"607","creationTimestamp":"2024-08-07T17:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"47e7e42e-7133-43ea-9d53-4694600f4ac1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47e7e42e-7133-43ea-9d53-4694600f4ac1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50141 chars]
	I0807 17:56:21.977506    9640 system_pods.go:59] 7 kube-system pods found
	I0807 17:56:21.977566    9640 system_pods.go:61] "coredns-7db6d8ff4d-wwrwt" [5ddd5aa0-8aab-423c-855d-b8ea1633db28] Running
	I0807 17:56:21.977566    9640 system_pods.go:61] "etcd-functional-100700" [a5c7f27d-b19f-437f-a53a-2bf11d9ac9ca] Running
	I0807 17:56:21.977566    9640 system_pods.go:61] "kube-apiserver-functional-100700" [5249a821-fdb0-4a53-9e1e-ff9336ba130f] Running
	I0807 17:56:21.977566    9640 system_pods.go:61] "kube-controller-manager-functional-100700" [1f265aaa-5e7b-41b5-9fa6-ce6a907b24a9] Running
	I0807 17:56:21.977566    9640 system_pods.go:61] "kube-proxy-fhgrj" [7777c5e7-cff4-448e-9880-a3b6c6264025] Running
	I0807 17:56:21.977566    9640 system_pods.go:61] "kube-scheduler-functional-100700" [1dbe4230-2246-468f-abe1-077025453579] Running
	I0807 17:56:21.977566    9640 system_pods.go:61] "storage-provisioner" [6b73faee-4244-4a09-840f-e9d22cedafe6] Running
	I0807 17:56:21.977566    9640 system_pods.go:74] duration metric: took 152.8132ms to wait for pod list to return data ...
	I0807 17:56:21.977661    9640 default_sa.go:34] waiting for default service account to be created ...
	I0807 17:56:22.151791    9640 request.go:629] Waited for 173.86ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.235.211:8441/api/v1/namespaces/default/serviceaccounts
	I0807 17:56:22.151862    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/default/serviceaccounts
	I0807 17:56:22.151862    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:22.151862    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:22.151862    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:22.155129    9640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 17:56:22.155129    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:22.155559    9640 round_trippers.go:580]     Audit-Id: 19e27914-9ca4-447d-81eb-0ee4389751b7
	I0807 17:56:22.155559    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:22.155559    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:22.155559    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:22.155559    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:22.155559    9640 round_trippers.go:580]     Content-Length: 261
	I0807 17:56:22.155559    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:22 GMT
	I0807 17:56:22.155559    9640 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"622"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"7d444e1b-c0d4-46f6-b4d1-5f16872ad0c7","resourceVersion":"347","creationTimestamp":"2024-08-07T17:53:38Z"}}]}
	I0807 17:56:22.155639    9640 default_sa.go:45] found service account: "default"
	I0807 17:56:22.155639    9640 default_sa.go:55] duration metric: took 177.9761ms for default service account to be created ...
	I0807 17:56:22.155639    9640 system_pods.go:116] waiting for k8s-apps to be running ...
	I0807 17:56:22.191970    9640 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:56:22.191970    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:56:22.192516    9640 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:56:22.192516    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:56:22.192690    9640 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 17:56:22.193337    9640 kapi.go:59] client config for functional-100700: &rest.Config{Host:"https://172.28.235.211:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-100700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-100700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1da64c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0807 17:56:22.194745    9640 addons.go:234] Setting addon default-storageclass=true in "functional-100700"
	W0807 17:56:22.194745    9640 addons.go:243] addon default-storageclass should already be in state true
	I0807 17:56:22.194745    9640 host.go:66] Checking if "functional-100700" exists ...
	I0807 17:56:22.195780    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:56:22.198462    9640 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 17:56:22.201115    9640 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 17:56:22.201115    9640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0807 17:56:22.201115    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:56:22.356124    9640 request.go:629] Waited for 200.3462ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods
	I0807 17:56:22.356468    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods
	I0807 17:56:22.356468    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:22.356468    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:22.356468    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:22.363233    9640 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 17:56:22.363325    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:22.363325    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:22.363325    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:22.363325    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:22.363325    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:22 GMT
	I0807 17:56:22.363325    9640 round_trippers.go:580]     Audit-Id: 33c1cb2e-d3b5-4ceb-9a32-7cd40f46a7d9
	I0807 17:56:22.363325    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:22.364998    9640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"622"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-wwrwt","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5ddd5aa0-8aab-423c-855d-b8ea1633db28","resourceVersion":"607","creationTimestamp":"2024-08-07T17:53:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"47e7e42e-7133-43ea-9d53-4694600f4ac1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T17:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47e7e42e-7133-43ea-9d53-4694600f4ac1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50141 chars]
	I0807 17:56:22.369126    9640 system_pods.go:86] 7 kube-system pods found
	I0807 17:56:22.369224    9640 system_pods.go:89] "coredns-7db6d8ff4d-wwrwt" [5ddd5aa0-8aab-423c-855d-b8ea1633db28] Running
	I0807 17:56:22.369224    9640 system_pods.go:89] "etcd-functional-100700" [a5c7f27d-b19f-437f-a53a-2bf11d9ac9ca] Running
	I0807 17:56:22.369224    9640 system_pods.go:89] "kube-apiserver-functional-100700" [5249a821-fdb0-4a53-9e1e-ff9336ba130f] Running
	I0807 17:56:22.369299    9640 system_pods.go:89] "kube-controller-manager-functional-100700" [1f265aaa-5e7b-41b5-9fa6-ce6a907b24a9] Running
	I0807 17:56:22.369299    9640 system_pods.go:89] "kube-proxy-fhgrj" [7777c5e7-cff4-448e-9880-a3b6c6264025] Running
	I0807 17:56:22.369299    9640 system_pods.go:89] "kube-scheduler-functional-100700" [1dbe4230-2246-468f-abe1-077025453579] Running
	I0807 17:56:22.369299    9640 system_pods.go:89] "storage-provisioner" [6b73faee-4244-4a09-840f-e9d22cedafe6] Running
	I0807 17:56:22.369399    9640 system_pods.go:126] duration metric: took 213.7576ms to wait for k8s-apps to be running ...
	I0807 17:56:22.369399    9640 system_svc.go:44] waiting for kubelet service to be running ....
	I0807 17:56:22.391285    9640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 17:56:22.417666    9640 system_svc.go:56] duration metric: took 48.2657ms WaitForService to wait for kubelet
	I0807 17:56:22.417666    9640 kubeadm.go:582] duration metric: took 2.5880301s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 17:56:22.417666    9640 node_conditions.go:102] verifying NodePressure condition ...
	I0807 17:56:22.562788    9640 request.go:629] Waited for 144.9255ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.235.211:8441/api/v1/nodes
	I0807 17:56:22.562788    9640 round_trippers.go:463] GET https://172.28.235.211:8441/api/v1/nodes
	I0807 17:56:22.562913    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:22.562913    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:22.562913    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:22.566573    9640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 17:56:22.566573    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:22.566573    9640 round_trippers.go:580]     Audit-Id: 441430d1-bb46-4252-8f69-4f126ad9e0ee
	I0807 17:56:22.566573    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:22.566573    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:22.566573    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:22.566573    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:22.566573    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:22 GMT
	I0807 17:56:22.567661    9640 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"622"},"items":[{"metadata":{"name":"functional-100700","uid":"bf51b3aa-e152-4402-bf4d-a2293e078735","resourceVersion":"538","creationTimestamp":"2024-08-07T17:53:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-100700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"functional-100700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T17_53_24_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4841 chars]
	I0807 17:56:22.568103    9640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 17:56:22.568197    9640 node_conditions.go:123] node cpu capacity is 2
	I0807 17:56:22.568197    9640 node_conditions.go:105] duration metric: took 150.5296ms to run NodePressure ...
	I0807 17:56:22.568197    9640 start.go:241] waiting for startup goroutines ...
	I0807 17:56:24.522668    9640 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:56:24.522668    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:56:24.522798    9640 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0807 17:56:24.522798    9640 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0807 17:56:24.523083    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:56:24.526421    9640 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:56:24.526421    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:56:24.526965    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:56:26.885029    9640 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:56:26.885855    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:56:26.885855    9640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:56:27.305378    9640 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:56:27.305378    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:56:27.306224    9640 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:56:27.447175    9640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 17:56:28.300334    9640 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0807 17:56:28.301335    9640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0807 17:56:28.301335    9640 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0807 17:56:28.301335    9640 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0807 17:56:28.301335    9640 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0807 17:56:28.301335    9640 command_runner.go:130] > pod/storage-provisioner configured
	I0807 17:56:29.519660    9640 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:56:29.520576    9640 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:56:29.521324    9640 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:56:29.654011    9640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0807 17:56:29.824134    9640 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0807 17:56:29.824466    9640 round_trippers.go:463] GET https://172.28.235.211:8441/apis/storage.k8s.io/v1/storageclasses
	I0807 17:56:29.824541    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:29.824541    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:29.824541    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:29.827898    9640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 17:56:29.827898    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:29.827898    9640 round_trippers.go:580]     Audit-Id: 36c54825-477d-43e6-bbf7-b4e6c5fd43d7
	I0807 17:56:29.827898    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:29.827984    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:29.827984    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:29.827984    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:29.827984    9640 round_trippers.go:580]     Content-Length: 1273
	I0807 17:56:29.827984    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:29 GMT
	I0807 17:56:29.828107    9640 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"628"},"items":[{"metadata":{"name":"standard","uid":"7d7feaf4-4e1a-4662-b37d-4da681fc2d46","resourceVersion":"429","creationTimestamp":"2024-08-07T17:53:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-07T17:53:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0807 17:56:29.828704    9640 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"7d7feaf4-4e1a-4662-b37d-4da681fc2d46","resourceVersion":"429","creationTimestamp":"2024-08-07T17:53:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-07T17:53:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0807 17:56:29.828704    9640 round_trippers.go:463] PUT https://172.28.235.211:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0807 17:56:29.828704    9640 round_trippers.go:469] Request Headers:
	I0807 17:56:29.828704    9640 round_trippers.go:473]     Content-Type: application/json
	I0807 17:56:29.828704    9640 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 17:56:29.828704    9640 round_trippers.go:473]     Accept: application/json, */*
	I0807 17:56:29.832303    9640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 17:56:29.833308    9640 round_trippers.go:577] Response Headers:
	I0807 17:56:29.833308    9640 round_trippers.go:580]     Content-Length: 1220
	I0807 17:56:29.833308    9640 round_trippers.go:580]     Date: Wed, 07 Aug 2024 17:56:29 GMT
	I0807 17:56:29.833308    9640 round_trippers.go:580]     Audit-Id: 448638b4-be15-4243-b633-1bb32ddf24e7
	I0807 17:56:29.833380    9640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 17:56:29.833380    9640 round_trippers.go:580]     Content-Type: application/json
	I0807 17:56:29.833380    9640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 188f7f8b-bd84-432a-9372-91c329f39bfc
	I0807 17:56:29.833414    9640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 858f2d38-9d61-42b7-a235-674c73f98f65
	I0807 17:56:29.833558    9640 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"7d7feaf4-4e1a-4662-b37d-4da681fc2d46","resourceVersion":"429","creationTimestamp":"2024-08-07T17:53:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-07T17:53:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0807 17:56:29.837450    9640 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0807 17:56:29.840627    9640 addons.go:510] duration metric: took 10.010873s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0807 17:56:29.840773    9640 start.go:246] waiting for cluster config update ...
	I0807 17:56:29.840773    9640 start.go:255] writing updated cluster config ...
	I0807 17:56:29.852400    9640 ssh_runner.go:195] Run: rm -f paused
	I0807 17:56:29.991010    9640 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0807 17:56:29.995465    9640 out.go:177] * Done! kubectl is now configured to use "functional-100700" cluster and "default" namespace by default
	
	
	==> Docker <==
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.345440953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.346309071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.427619805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.427935011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.427958412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.428175716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450251060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450326762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450344662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450438364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T17:56:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ee73743ff4167ab913696e97e9a77ab9a2b23dba89c1348473bb28a7089d45c2/resolv.conf as [nameserver 172.28.224.1]"
	Aug 07 17:56:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T17:56:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/241a3e17f71d6549f6693cc1faa13d1a82113d0126d862a38ff582ea671e7704/resolv.conf as [nameserver 172.28.224.1]"
	Aug 07 17:56:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T17:56:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a22b12bc48412adcefcda1aa5f0176acba34d5ef617a5fc6e81727d4bf2fb9f/resolv.conf as [nameserver 172.28.224.1]"
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021378960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021447242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021467036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021664985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.032269201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.032481345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.033742514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.034300967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.230710505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.231303050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.231404523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.231887696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3bc20896cf9b1       cbb01a7bd410d       2 minutes ago       Running             coredns                   1                   0a22b12bc4841       coredns-7db6d8ff4d-wwrwt
	a781cd4bdb895       55bb025d2cfa5       2 minutes ago       Running             kube-proxy                1                   241a3e17f71d6       kube-proxy-fhgrj
	60d38309b3f44       6e38f40d628db       2 minutes ago       Running             storage-provisioner       1                   ee73743ff4167       storage-provisioner
	333ea0f6bde6b       76932a3b37d7e       2 minutes ago       Running             kube-controller-manager   1                   125a3650c7bb9       kube-controller-manager-functional-100700
	ceb9a86ed09cc       1f6d574d502f3       2 minutes ago       Running             kube-apiserver            1                   ff33a3021f789       kube-apiserver-functional-100700
	d57a72e940a3a       3edc18e7b7672       2 minutes ago       Running             kube-scheduler            1                   b39dd51107644       kube-scheduler-functional-100700
	623399e23aa3b       3861cfcd7c04c       2 minutes ago       Running             etcd                      1                   011ec8239aa1c       etcd-functional-100700
	1ca5873cb027b       6e38f40d628db       4 minutes ago       Exited              storage-provisioner       0                   32e3bea2a9315       storage-provisioner
	8257548df8d0d       cbb01a7bd410d       4 minutes ago       Exited              coredns                   0                   a334b1535e2f7       coredns-7db6d8ff4d-wwrwt
	9f7b90986285c       55bb025d2cfa5       4 minutes ago       Exited              kube-proxy                0                   4f7e1db775dc2       kube-proxy-fhgrj
	76120dfe1c32e       1f6d574d502f3       5 minutes ago       Exited              kube-apiserver            0                   b9283200bae35       kube-apiserver-functional-100700
	88ef6e03a7d49       76932a3b37d7e       5 minutes ago       Exited              kube-controller-manager   0                   f87ac0281bc26       kube-controller-manager-functional-100700
	03079679d68cc       3edc18e7b7672       5 minutes ago       Exited              kube-scheduler            0                   6f09e37137542       kube-scheduler-functional-100700
	8e6d65d222dda       3861cfcd7c04c       5 minutes ago       Exited              etcd                      0                   f907706c00ebe       etcd-functional-100700
	
	
	==> coredns [3bc20896cf9b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 61f4d0960164fdf8d8157aaa96d041acf5b29f3c98ba802d705114162ff9f2cc889bbb973f9b8023f3112734912ee6f4eadc4faa21115183d5697de30dae3805
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59693 - 19012 "HINFO IN 18429329265720740.2191761515672659795. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.053241723s
	
	
	==> coredns [8257548df8d0] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[553389830]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (07-Aug-2024 17:53:41.209) (total time: 30000ms):
	Trace[553389830]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:54:11.209)
	Trace[553389830]: [30.000640183s] [30.000640183s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1630496973]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (07-Aug-2024 17:53:41.209) (total time: 30000ms):
	Trace[1630496973]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:54:11.210)
	Trace[1630496973]: [30.000904622s] [30.000904622s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[915081878]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (07-Aug-2024 17:53:41.208) (total time: 30002ms):
	Trace[915081878]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:54:11.209)
	Trace[915081878]: [30.00210535s] [30.00210535s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 61f4d0960164fdf8d8157aaa96d041acf5b29f3c98ba802d705114162ff9f2cc889bbb973f9b8023f3112734912ee6f4eadc4faa21115183d5697de30dae3805
	[INFO] Reloading complete
	[INFO] 127.0.0.1:45239 - 56415 "HINFO IN 1532035815588090643.4324461608647671638. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.136618729s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-100700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-100700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=functional-100700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_07T17_53_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 17:53:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-100700
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 17:58:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 17:58:09 +0000   Wed, 07 Aug 2024 17:53:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 17:58:09 +0000   Wed, 07 Aug 2024 17:53:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 17:58:09 +0000   Wed, 07 Aug 2024 17:53:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 17:58:09 +0000   Wed, 07 Aug 2024 17:53:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.235.211
	  Hostname:    functional-100700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912864Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912864Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2c236e17bfe41ecb74a7a4db678ba17
	  System UUID:                5fd3904b-92a6-b943-8c9f-cc4a700bb9cc
	  Boot ID:                    fa695260-1a6f-4ecb-a39f-d253dae88f0e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-wwrwt                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m39s
	  kube-system                 etcd-functional-100700                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m54s
	  kube-system                 kube-apiserver-functional-100700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-controller-manager-functional-100700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-proxy-fhgrj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-scheduler-functional-100700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m37s                  kube-proxy       
	  Normal  Starting                 2m9s                   kube-proxy       
	  Normal  Starting                 5m3s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m3s (x8 over 5m3s)    kubelet          Node functional-100700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m3s (x8 over 5m3s)    kubelet          Node functional-100700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m3s (x7 over 5m3s)    kubelet          Node functional-100700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m3s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m55s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m54s                  kubelet          Node functional-100700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m54s                  kubelet          Node functional-100700 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m54s                  kubelet          Node functional-100700 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                4m50s                  kubelet          Node functional-100700 status is now: NodeReady
	  Normal  RegisteredNode           4m40s                  node-controller  Node functional-100700 event: Registered Node functional-100700 in Controller
	  Normal  Starting                 2m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m17s (x8 over 2m17s)  kubelet          Node functional-100700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m17s (x8 over 2m17s)  kubelet          Node functional-100700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m17s (x7 over 2m17s)  kubelet          Node functional-100700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           118s                   node-controller  Node functional-100700 event: Registered Node functional-100700 in Controller
	
	
	==> dmesg <==
	[  +5.459256] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.851003] systemd-fstab-generator[1681]: Ignoring "noauto" option for root device
	[  +7.326983] systemd-fstab-generator[1887]: Ignoring "noauto" option for root device
	[  +0.116697] kauditd_printk_skb: 48 callbacks suppressed
	[  +8.558256] systemd-fstab-generator[2286]: Ignoring "noauto" option for root device
	[  +0.149197] kauditd_printk_skb: 62 callbacks suppressed
	[ +15.852357] systemd-fstab-generator[2536]: Ignoring "noauto" option for root device
	[  +0.197291] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.405097] kauditd_printk_skb: 88 callbacks suppressed
	[Aug 7 17:54] kauditd_printk_skb: 10 callbacks suppressed
	[Aug 7 17:55] systemd-fstab-generator[3950]: Ignoring "noauto" option for root device
	[  +0.649998] systemd-fstab-generator[3985]: Ignoring "noauto" option for root device
	[  +0.264046] systemd-fstab-generator[3997]: Ignoring "noauto" option for root device
	[  +0.315437] systemd-fstab-generator[4011]: Ignoring "noauto" option for root device
	[  +5.331377] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.070413] systemd-fstab-generator[4649]: Ignoring "noauto" option for root device
	[  +0.207492] systemd-fstab-generator[4660]: Ignoring "noauto" option for root device
	[  +0.210288] systemd-fstab-generator[4672]: Ignoring "noauto" option for root device
	[  +0.283847] systemd-fstab-generator[4687]: Ignoring "noauto" option for root device
	[  +0.989188] systemd-fstab-generator[4863]: Ignoring "noauto" option for root device
	[Aug 7 17:56] systemd-fstab-generator[4990]: Ignoring "noauto" option for root device
	[  +0.108824] kauditd_printk_skb: 137 callbacks suppressed
	[  +6.503684] kauditd_printk_skb: 52 callbacks suppressed
	[ +11.988991] systemd-fstab-generator[5884]: Ignoring "noauto" option for root device
	[  +0.145733] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [623399e23aa3] <==
	{"level":"info","ts":"2024-08-07T17:56:03.36382Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-07T17:56:03.363832Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-07T17:56:03.367016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dfcc4f7f4794fed switched to configuration voters=(4466661499682246637)"}
	{"level":"info","ts":"2024-08-07T17:56:03.368477Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4d6ccf4a78e670a9","local-member-id":"3dfcc4f7f4794fed","added-peer-id":"3dfcc4f7f4794fed","added-peer-peer-urls":["https://172.28.235.211:2380"]}
	{"level":"info","ts":"2024-08-07T17:56:03.368573Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4d6ccf4a78e670a9","local-member-id":"3dfcc4f7f4794fed","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T17:56:03.368602Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T17:56:03.381101Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-07T17:56:03.382356Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.28.235.211:2380"}
	{"level":"info","ts":"2024-08-07T17:56:03.382687Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.28.235.211:2380"}
	{"level":"info","ts":"2024-08-07T17:56:03.387698Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3dfcc4f7f4794fed","initial-advertise-peer-urls":["https://172.28.235.211:2380"],"listen-peer-urls":["https://172.28.235.211:2380"],"advertise-client-urls":["https://172.28.235.211:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.235.211:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-07T17:56:03.387755Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-07T17:56:04.972741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dfcc4f7f4794fed is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-07T17:56:04.973173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dfcc4f7f4794fed became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-07T17:56:04.973723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dfcc4f7f4794fed received MsgPreVoteResp from 3dfcc4f7f4794fed at term 2"}
	{"level":"info","ts":"2024-08-07T17:56:04.97405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dfcc4f7f4794fed became candidate at term 3"}
	{"level":"info","ts":"2024-08-07T17:56:04.974209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dfcc4f7f4794fed received MsgVoteResp from 3dfcc4f7f4794fed at term 3"}
	{"level":"info","ts":"2024-08-07T17:56:04.974424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dfcc4f7f4794fed became leader at term 3"}
	{"level":"info","ts":"2024-08-07T17:56:04.97456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3dfcc4f7f4794fed elected leader 3dfcc4f7f4794fed at term 3"}
	{"level":"info","ts":"2024-08-07T17:56:04.982322Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3dfcc4f7f4794fed","local-member-attributes":"{Name:functional-100700 ClientURLs:[https://172.28.235.211:2379]}","request-path":"/0/members/3dfcc4f7f4794fed/attributes","cluster-id":"4d6ccf4a78e670a9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-07T17:56:04.982379Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T17:56:04.982862Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T17:56:04.986522Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.28.235.211:2379"}
	{"level":"info","ts":"2024-08-07T17:56:04.989971Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-07T17:56:04.998053Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-07T17:56:04.99824Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [8e6d65d222dd] <==
	{"level":"info","ts":"2024-08-07T17:53:17.241258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dfcc4f7f4794fed became leader at term 2"}
	{"level":"info","ts":"2024-08-07T17:53:17.241267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3dfcc4f7f4794fed elected leader 3dfcc4f7f4794fed at term 2"}
	{"level":"info","ts":"2024-08-07T17:53:17.248461Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T17:53:17.252444Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3dfcc4f7f4794fed","local-member-attributes":"{Name:functional-100700 ClientURLs:[https://172.28.235.211:2379]}","request-path":"/0/members/3dfcc4f7f4794fed/attributes","cluster-id":"4d6ccf4a78e670a9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-07T17:53:17.25261Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T17:53:17.252893Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T17:53:17.254225Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-07T17:53:17.254298Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-07T17:53:17.25482Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4d6ccf4a78e670a9","local-member-id":"3dfcc4f7f4794fed","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T17:53:17.254914Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T17:53:17.254934Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T17:53:17.256756Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.28.235.211:2379"}
	{"level":"info","ts":"2024-08-07T17:53:17.257404Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-07T17:55:42.977793Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-07T17:55:42.97785Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-100700","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.28.235.211:2380"],"advertise-client-urls":["https://172.28.235.211:2379"]}
	{"level":"warn","ts":"2024-08-07T17:55:42.978062Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-07T17:55:42.978184Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/08/07 17:55:42 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/07 17:55:42 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-07T17:55:43.056155Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 172.28.235.211:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-07T17:55:43.056224Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 172.28.235.211:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-07T17:55:43.056336Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3dfcc4f7f4794fed","current-leader-member-id":"3dfcc4f7f4794fed"}
	{"level":"info","ts":"2024-08-07T17:55:43.075689Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"172.28.235.211:2380"}
	{"level":"info","ts":"2024-08-07T17:55:43.076058Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"172.28.235.211:2380"}
	{"level":"info","ts":"2024-08-07T17:55:43.076079Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-100700","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.28.235.211:2380"],"advertise-client-urls":["https://172.28.235.211:2379"]}
	
	
	==> kernel <==
	 17:58:18 up 7 min,  0 users,  load average: 0.23, 0.47, 0.25
	Linux functional-100700 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [76120dfe1c32] <==
	W0807 17:55:52.135726       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.151739       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.160711       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.271409       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.294203       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.296105       1 logging.go:59] [core] [Channel #18 SubChannel #19] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.314914       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.331769       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.427094       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.454900       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.464964       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.546730       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.573537       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.614131       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.631176       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.680351       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.741177       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.748350       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.858594       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.859439       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.909486       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.925193       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.928088       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.943827       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 17:55:52.958363       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [ceb9a86ed09c] <==
	I0807 17:56:06.788194       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0807 17:56:06.803467       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0807 17:56:06.803485       1 policy_source.go:224] refreshing policies
	I0807 17:56:06.813698       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0807 17:56:06.814162       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0807 17:56:06.814344       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0807 17:56:06.813721       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0807 17:56:06.815611       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0807 17:56:06.816469       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0807 17:56:06.817463       1 aggregator.go:165] initial CRD sync complete...
	I0807 17:56:06.817693       1 autoregister_controller.go:141] Starting autoregister controller
	I0807 17:56:06.817963       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0807 17:56:06.818156       1 cache.go:39] Caches are synced for autoregister controller
	I0807 17:56:06.813732       1 shared_informer.go:320] Caches are synced for configmaps
	I0807 17:56:06.822713       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0807 17:56:06.856940       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0807 17:56:07.619216       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0807 17:56:08.579395       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.235.211]
	I0807 17:56:08.586105       1 controller.go:615] quota admission added evaluator for: endpoints
	I0807 17:56:09.383724       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0807 17:56:09.418896       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0807 17:56:09.560788       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0807 17:56:09.641827       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0807 17:56:09.652186       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0807 17:56:20.103470       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [333ea0f6bde6] <==
	I0807 17:56:19.905785       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0807 17:56:19.912933       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0807 17:56:20.003761       1 shared_informer.go:320] Caches are synced for resource quota
	I0807 17:56:20.040937       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"functional-100700\" does not exist"
	I0807 17:56:20.071286       1 shared_informer.go:320] Caches are synced for attach detach
	I0807 17:56:20.072453       1 shared_informer.go:320] Caches are synced for node
	I0807 17:56:20.073188       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0807 17:56:20.073406       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0807 17:56:20.073709       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0807 17:56:20.074168       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0807 17:56:20.074728       1 shared_informer.go:320] Caches are synced for taint
	I0807 17:56:20.075078       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0807 17:56:20.075421       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-100700"
	I0807 17:56:20.075744       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0807 17:56:20.076846       1 shared_informer.go:320] Caches are synced for disruption
	I0807 17:56:20.079525       1 shared_informer.go:320] Caches are synced for resource quota
	I0807 17:56:20.084846       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0807 17:56:20.099132       1 shared_informer.go:320] Caches are synced for TTL
	I0807 17:56:20.109314       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0807 17:56:20.114165       1 shared_informer.go:320] Caches are synced for GC
	I0807 17:56:20.120236       1 shared_informer.go:320] Caches are synced for daemon sets
	I0807 17:56:20.139408       1 shared_informer.go:320] Caches are synced for persistent volume
	I0807 17:56:20.543907       1 shared_informer.go:320] Caches are synced for garbage collector
	I0807 17:56:20.544041       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0807 17:56:20.555057       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [88ef6e03a7d4] <==
	I0807 17:53:38.184686       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0807 17:53:38.228672       1 shared_informer.go:320] Caches are synced for taint
	I0807 17:53:38.228814       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0807 17:53:38.228871       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-100700"
	I0807 17:53:38.228902       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0807 17:53:38.229060       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0807 17:53:38.235078       1 shared_informer.go:320] Caches are synced for daemon sets
	I0807 17:53:38.680030       1 shared_informer.go:320] Caches are synced for garbage collector
	I0807 17:53:38.680469       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0807 17:53:38.736542       1 shared_informer.go:320] Caches are synced for garbage collector
	I0807 17:53:39.200261       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="296.018867ms"
	I0807 17:53:39.232224       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.184469ms"
	I0807 17:53:39.233325       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.5µs"
	I0807 17:53:39.233521       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.001µs"
	I0807 17:53:40.563337       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="88.601344ms"
	I0807 17:53:40.607469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.090819ms"
	I0807 17:53:40.607570       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="67.201µs"
	I0807 17:53:41.721869       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.713µs"
	I0807 17:53:41.786079       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="114.333µs"
	I0807 17:53:51.557177       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="70.718µs"
	I0807 17:53:51.879212       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="194.649µs"
	I0807 17:53:51.912106       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="33.708µs"
	I0807 17:53:51.918145       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="38.009µs"
	I0807 17:54:19.538599       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.547711ms"
	I0807 17:54:19.540217       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="1.562927ms"
	
	
	==> kube-proxy [9f7b90986285] <==
	I0807 17:53:40.772347       1 server_linux.go:69] "Using iptables proxy"
	I0807 17:53:40.794492       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.235.211"]
	I0807 17:53:40.862282       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0807 17:53:40.862359       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0807 17:53:40.862381       1 server_linux.go:165] "Using iptables Proxier"
	I0807 17:53:40.868891       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0807 17:53:40.869293       1 server.go:872] "Version info" version="v1.30.3"
	I0807 17:53:40.869310       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 17:53:40.871221       1 config.go:192] "Starting service config controller"
	I0807 17:53:40.871282       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0807 17:53:40.871311       1 config.go:101] "Starting endpoint slice config controller"
	I0807 17:53:40.871334       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0807 17:53:40.872179       1 config.go:319] "Starting node config controller"
	I0807 17:53:40.872231       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0807 17:53:40.972444       1 shared_informer.go:320] Caches are synced for node config
	I0807 17:53:40.972873       1 shared_informer.go:320] Caches are synced for service config
	I0807 17:53:40.973042       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [a781cd4bdb89] <==
	I0807 17:56:09.381390       1 server_linux.go:69] "Using iptables proxy"
	I0807 17:56:09.417860       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.235.211"]
	I0807 17:56:09.532860       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0807 17:56:09.532933       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0807 17:56:09.532952       1 server_linux.go:165] "Using iptables Proxier"
	I0807 17:56:09.540653       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0807 17:56:09.541274       1 server.go:872] "Version info" version="v1.30.3"
	I0807 17:56:09.542377       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 17:56:09.545944       1 config.go:192] "Starting service config controller"
	I0807 17:56:09.547694       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0807 17:56:09.547912       1 config.go:101] "Starting endpoint slice config controller"
	I0807 17:56:09.548059       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0807 17:56:09.548899       1 config.go:319] "Starting node config controller"
	I0807 17:56:09.551116       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0807 17:56:09.648372       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0807 17:56:09.648607       1 shared_informer.go:320] Caches are synced for service config
	I0807 17:56:09.652303       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [03079679d68c] <==
	W0807 17:53:21.742837       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0807 17:53:21.742972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0807 17:53:21.849509       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0807 17:53:21.850015       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0807 17:53:21.873933       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0807 17:53:21.874264       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0807 17:53:21.922681       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0807 17:53:21.922727       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0807 17:53:21.929627       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0807 17:53:21.930723       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0807 17:53:21.951515       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0807 17:53:21.951560       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0807 17:53:21.957878       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0807 17:53:21.957944       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0807 17:53:22.045426       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0807 17:53:22.045855       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0807 17:53:22.051226       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0807 17:53:22.051257       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0807 17:53:22.071491       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0807 17:53:22.071658       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0807 17:53:23.971586       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0807 17:55:42.929866       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0807 17:55:42.929966       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0807 17:55:42.930247       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0807 17:55:42.930650       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d57a72e940a3] <==
	I0807 17:56:05.120676       1 serving.go:380] Generated self-signed cert in-memory
	W0807 17:56:06.706252       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0807 17:56:06.706675       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0807 17:56:06.706935       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0807 17:56:06.707177       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0807 17:56:06.768204       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0807 17:56:06.768578       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 17:56:06.772866       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0807 17:56:06.773411       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0807 17:56:06.776063       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0807 17:56:06.773428       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0807 17:56:06.879202       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 07 17:56:03 functional-100700 kubelet[4998]: E0807 17:56:03.180415    4998 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8441/api/v1/nodes\": dial tcp 172.28.235.211:8441: connect: connection refused" node="functional-100700"
	Aug 07 17:56:04 functional-100700 kubelet[4998]: I0807 17:56:04.782536    4998 kubelet_node_status.go:73] "Attempting to register node" node="functional-100700"
	Aug 07 17:56:06 functional-100700 kubelet[4998]: I0807 17:56:06.896690    4998 kubelet_node_status.go:112] "Node was previously registered" node="functional-100700"
	Aug 07 17:56:06 functional-100700 kubelet[4998]: I0807 17:56:06.897350    4998 kubelet_node_status.go:76] "Successfully registered node" node="functional-100700"
	Aug 07 17:56:06 functional-100700 kubelet[4998]: I0807 17:56:06.898916    4998 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 07 17:56:06 functional-100700 kubelet[4998]: I0807 17:56:06.899910    4998 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 07 17:56:07 functional-100700 kubelet[4998]: E0807 17:56:07.076730    4998 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-functional-100700\" already exists" pod="kube-system/kube-apiserver-functional-100700"
	Aug 07 17:56:07 functional-100700 kubelet[4998]: I0807 17:56:07.644845    4998 apiserver.go:52] "Watching apiserver"
	Aug 07 17:56:07 functional-100700 kubelet[4998]: I0807 17:56:07.650546    4998 topology_manager.go:215] "Topology Admit Handler" podUID="7777c5e7-cff4-448e-9880-a3b6c6264025" podNamespace="kube-system" podName="kube-proxy-fhgrj"
	Aug 07 17:56:07 functional-100700 kubelet[4998]: I0807 17:56:07.650920    4998 topology_manager.go:215] "Topology Admit Handler" podUID="5ddd5aa0-8aab-423c-855d-b8ea1633db28" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wwrwt"
	Aug 07 17:56:07 functional-100700 kubelet[4998]: I0807 17:56:07.651155    4998 topology_manager.go:215] "Topology Admit Handler" podUID="6b73faee-4244-4a09-840f-e9d22cedafe6" podNamespace="kube-system" podName="storage-provisioner"
	Aug 07 17:56:07 functional-100700 kubelet[4998]: I0807 17:56:07.664180    4998 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 07 17:56:07 functional-100700 kubelet[4998]: I0807 17:56:07.745666    4998 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7777c5e7-cff4-448e-9880-a3b6c6264025-xtables-lock\") pod \"kube-proxy-fhgrj\" (UID: \"7777c5e7-cff4-448e-9880-a3b6c6264025\") " pod="kube-system/kube-proxy-fhgrj"
	Aug 07 17:56:07 functional-100700 kubelet[4998]: I0807 17:56:07.746470    4998 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6b73faee-4244-4a09-840f-e9d22cedafe6-tmp\") pod \"storage-provisioner\" (UID: \"6b73faee-4244-4a09-840f-e9d22cedafe6\") " pod="kube-system/storage-provisioner"
	Aug 07 17:56:07 functional-100700 kubelet[4998]: I0807 17:56:07.746940    4998 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7777c5e7-cff4-448e-9880-a3b6c6264025-lib-modules\") pod \"kube-proxy-fhgrj\" (UID: \"7777c5e7-cff4-448e-9880-a3b6c6264025\") " pod="kube-system/kube-proxy-fhgrj"
	Aug 07 17:57:01 functional-100700 kubelet[4998]: E0807 17:57:01.769502    4998 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 17:57:01 functional-100700 kubelet[4998]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 17:57:01 functional-100700 kubelet[4998]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 17:57:01 functional-100700 kubelet[4998]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 17:57:01 functional-100700 kubelet[4998]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 17:58:01 functional-100700 kubelet[4998]: E0807 17:58:01.763650    4998 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 17:58:01 functional-100700 kubelet[4998]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 17:58:01 functional-100700 kubelet[4998]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 17:58:01 functional-100700 kubelet[4998]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 17:58:01 functional-100700 kubelet[4998]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [1ca5873cb027] <==
	I0807 17:53:48.449375       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0807 17:53:48.464841       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0807 17:53:48.464909       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0807 17:53:48.488946       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0807 17:53:48.489239       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-100700_5c6e74c7-fc70-4a34-bd7c-e7f4694755b4!
	I0807 17:53:48.492709       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d24b6e37-a747-4c9b-8fa2-66f5f5caf4cb", APIVersion:"v1", ResourceVersion:"425", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-100700_5c6e74c7-fc70-4a34-bd7c-e7f4694755b4 became leader
	I0807 17:53:48.590053       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-100700_5c6e74c7-fc70-4a34-bd7c-e7f4694755b4!
	
	
	==> storage-provisioner [60d38309b3f4] <==
	I0807 17:56:09.313135       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0807 17:56:09.338619       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0807 17:56:09.340575       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0807 17:56:26.768201       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0807 17:56:26.769357       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-100700_234b600c-6bb7-48ca-9602-3e8952de8069!
	I0807 17:56:26.770295       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d24b6e37-a747-4c9b-8fa2-66f5f5caf4cb", APIVersion:"v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-100700_234b600c-6bb7-48ca-9602-3e8952de8069 became leader
	I0807 17:56:26.870778       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-100700_234b600c-6bb7-48ca-9602-3e8952de8069!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0807 17:58:10.634732   14048 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-100700 -n functional-100700
E0807 17:58:20.450917    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-100700 -n functional-100700: (12.4683017s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-100700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (35.00s)

                                                
                                    
TestFunctional/serial/ExtraConfig (345.73s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-100700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-100700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 90 (2m32.2057764s)

                                                
                                                
-- stdout --
	* [functional-100700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19389
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "functional-100700" primary control-plane node in "functional-100700" cluster
	* Updating the running hyperv "functional-100700" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0807 17:58:33.172389    2092 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 07 17:52:16 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:52:16 functional-100700 dockerd[674]: time="2024-08-07T17:52:16.458341985Z" level=info msg="Starting up"
	Aug 07 17:52:16 functional-100700 dockerd[674]: time="2024-08-07T17:52:16.459483937Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:52:16 functional-100700 dockerd[674]: time="2024-08-07T17:52:16.460719594Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=680
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.493113277Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523238457Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523275259Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523339562Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523356263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523427766Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523446067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523804083Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523901688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523925089Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523938689Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.524034194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.524376109Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.527352746Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.527600157Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528068478Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528219485Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528416294Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528629904Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.581871643Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.581973248Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582080552Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582105454Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582123954Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582283562Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582887889Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583040296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583147301Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583169102Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583185303Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583200004Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583228805Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583248006Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583336710Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583450015Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583471716Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583486017Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583527319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583544019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583560020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583595322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583708227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583728328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583743029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583769930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583818932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583858834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583890635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583921137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583935837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583972939Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584001540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584017041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584038742Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584125046Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584167548Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584182949Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584202250Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584215250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584231651Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584243051Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584478362Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584610368Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584865080Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.585009287Z" level=info msg="containerd successfully booted in 0.093145s"
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.539100050Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.582623719Z" level=info msg="Loading containers: start."
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.757771440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.984107768Z" level=info msg="Loading containers: done."
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.005649861Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.006524396Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.114597281Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.114734986Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:52:18 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:52:49 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.863488345Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.866317260Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.866749062Z" level=info msg="Daemon shutdown complete"
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.867048363Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.867142864Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 17:52:50 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 17:52:50 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 17:52:50 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:52:50 functional-100700 dockerd[1088]: time="2024-08-07T17:52:50.924908641Z" level=info msg="Starting up"
	Aug 07 17:52:50 functional-100700 dockerd[1088]: time="2024-08-07T17:52:50.926025447Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:52:50 functional-100700 dockerd[1088]: time="2024-08-07T17:52:50.927064452Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1094
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.958194110Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986326653Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986364954Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986401554Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986436154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986479654Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986495054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986720855Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986821556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986844356Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986855856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986880556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.987134958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990330074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990438474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990847676Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990948477Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991014577Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991067378Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991319879Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991378979Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991397879Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991412779Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991428979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991496580Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992185983Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992409884Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992647286Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992672686Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992688386Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992794286Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993149888Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993243389Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993271489Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993292489Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993307089Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993318789Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993338389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993353889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993377989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993393189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993409289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993422089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993433690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993445490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993457890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993471990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993490190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993561890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993582490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993597890Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993619090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993632991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993644091Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993764891Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993878492Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993897692Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993910692Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994016892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994112293Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994155593Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994503995Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994761996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994864197Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.995059098Z" level=info msg="containerd successfully booted in 0.037887s"
	Aug 07 17:52:51 functional-100700 dockerd[1088]: time="2024-08-07T17:52:51.976962789Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.014319979Z" level=info msg="Loading containers: start."
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.153625988Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.280398732Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.383509956Z" level=info msg="Loading containers: done."
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.407173376Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.407304177Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.460329447Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.460475947Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:52:52 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.251394538Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.254204052Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 17:53:01 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.260155282Z" level=info msg="Daemon shutdown complete"
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.260456984Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.260937686Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 17:53:02 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 17:53:02 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 17:53:02 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:53:02 functional-100700 dockerd[1438]: time="2024-08-07T17:53:02.321797079Z" level=info msg="Starting up"
	Aug 07 17:53:02 functional-100700 dockerd[1438]: time="2024-08-07T17:53:02.323692689Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:53:02 functional-100700 dockerd[1438]: time="2024-08-07T17:53:02.324967095Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1444
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.356801457Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391549934Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391620134Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391684835Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391704135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391750835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391768635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391946636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392117037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392142737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392156437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392185437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392310838Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395280253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395439954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395604655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395701255Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395733555Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396053557Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396602860Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396804261Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396886161Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396963461Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397040262Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397105662Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397382264Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397664265Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397760665Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397781966Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397796766Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397810066Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397829466Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397849466Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397864866Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397877866Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397890366Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397902266Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397930866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398086167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398131667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398146767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398159868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398173168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398186368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398199368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398230968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398291368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398305068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398318768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398347068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398379069Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398633370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398774671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398795271Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398837271Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399058072Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399114872Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399134672Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399145573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399188373Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399202473Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399579475Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399779376Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399959877Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.400151978Z" level=info msg="containerd successfully booted in 0.045445s"
	Aug 07 17:53:03 functional-100700 dockerd[1438]: time="2024-08-07T17:53:03.371421015Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.638419724Z" level=info msg="Loading containers: start."
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.762102252Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.878637045Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.979158756Z" level=info msg="Loading containers: done."
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.006779696Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.006939697Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.050899521Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.051782725Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:53:07 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.457114123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.457756947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.457814849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.458624579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.566929844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.567055349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.567075749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.567173853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.642620715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.643094533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.643146835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.647326788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.647829406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.647978912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.648649436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.651222930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.841899112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.842260225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.842357529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.842730042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987249334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987542945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987581346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987882057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.071713287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.072501915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.072657920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.072778524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.120557979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.120838189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.121035196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.121432210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836342825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836494127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836527028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836919632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.031505099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.031971604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.032036705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.032230607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.071740773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.071807874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.071821974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.072043276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.388937110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.389400016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.389566918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.390025223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.011330360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.011407583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.013870604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.017327916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.053458090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.053872712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.054119484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.054800983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064470635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064566761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064584666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064692294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.363127500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.363436282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.363741263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.364157974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:51 functional-100700 dockerd[1438]: time="2024-08-07T17:53:51.247657321Z" level=info msg="ignoring event" container=9d5becd51e7186ec4acc242dcdf88a4327987ec048e0ee52430eac221c2fc0fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.250057633Z" level=info msg="shim disconnected" id=9d5becd51e7186ec4acc242dcdf88a4327987ec048e0ee52430eac221c2fc0fe namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.250177263Z" level=warning msg="cleaning up after shim disconnected" id=9d5becd51e7186ec4acc242dcdf88a4327987ec048e0ee52430eac221c2fc0fe namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.250194468Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1438]: time="2024-08-07T17:53:51.423182591Z" level=info msg="ignoring event" container=2a2add9d4c7dacc6804bc6f6e21cf8821ae80a5ef75964e66784019654fe3bc3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.423885070Z" level=info msg="shim disconnected" id=2a2add9d4c7dacc6804bc6f6e21cf8821ae80a5ef75964e66784019654fe3bc3 namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.424164141Z" level=warning msg="cleaning up after shim disconnected" id=2a2add9d4c7dacc6804bc6f6e21cf8821ae80a5ef75964e66784019654fe3bc3 namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.424226557Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:42 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:55:42 functional-100700 dockerd[1438]: time="2024-08-07T17:55:42.811970533Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.080390772Z" level=info msg="ignoring event" container=f87ac0281bc2649782f39abdff8151e0c016bf26b3182a3df5c8579e21fe65d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.081622815Z" level=info msg="shim disconnected" id=f87ac0281bc2649782f39abdff8151e0c016bf26b3182a3df5c8579e21fe65d7 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.082180634Z" level=warning msg="cleaning up after shim disconnected" id=f87ac0281bc2649782f39abdff8151e0c016bf26b3182a3df5c8579e21fe65d7 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.082393841Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.098416193Z" level=info msg="ignoring event" container=b9283200bae35965a0ee1c5549a6004ce0c7d70f70b9a374e41cd065d4d5dec3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.099709637Z" level=info msg="shim disconnected" id=b9283200bae35965a0ee1c5549a6004ce0c7d70f70b9a374e41cd065d4d5dec3 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.099819341Z" level=warning msg="cleaning up after shim disconnected" id=b9283200bae35965a0ee1c5549a6004ce0c7d70f70b9a374e41cd065d4d5dec3 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.099888243Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.121832799Z" level=info msg="shim disconnected" id=03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.121978304Z" level=warning msg="cleaning up after shim disconnected" id=03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122200511Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122413919Z" level=info msg="shim disconnected" id=1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122477321Z" level=warning msg="cleaning up after shim disconnected" id=1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122491521Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.122750830Z" level=info msg="ignoring event" container=1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.122822433Z" level=info msg="ignoring event" container=03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.132577069Z" level=info msg="shim disconnected" id=9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.132832477Z" level=warning msg="cleaning up after shim disconnected" id=9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.132974882Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.138866285Z" level=info msg="shim disconnected" id=f907706c00ebe4149920447890732b438dd707eb66023282c7b66ac64e2185b5 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.139038791Z" level=warning msg="cleaning up after shim disconnected" id=f907706c00ebe4149920447890732b438dd707eb66023282c7b66ac64e2185b5 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.139187396Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.155113444Z" level=info msg="shim disconnected" id=a334b1535e2f7b257588b80636584105387ba212b623a89972b0a362d62c0504 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.155224248Z" level=warning msg="cleaning up after shim disconnected" id=a334b1535e2f7b257588b80636584105387ba212b623a89972b0a362d62c0504 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.155238049Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.155389654Z" level=info msg="ignoring event" container=f907706c00ebe4149920447890732b438dd707eb66023282c7b66ac64e2185b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.155569360Z" level=info msg="ignoring event" container=9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.155886271Z" level=info msg="ignoring event" container=a334b1535e2f7b257588b80636584105387ba212b623a89972b0a362d62c0504 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.169713847Z" level=info msg="shim disconnected" id=32e3bea2a931545b3b3164d713e35af3f8439f208d9a2909552dc971a61ca84a namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.170314567Z" level=warning msg="cleaning up after shim disconnected" id=32e3bea2a931545b3b3164d713e35af3f8439f208d9a2909552dc971a61ca84a namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.170672780Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.183709228Z" level=info msg="ignoring event" container=32e3bea2a931545b3b3164d713e35af3f8439f208d9a2909552dc971a61ca84a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.186931639Z" level=info msg="ignoring event" container=8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.187130746Z" level=info msg="shim disconnected" id=8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.187455057Z" level=warning msg="cleaning up after shim disconnected" id=8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.187626563Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.203552411Z" level=info msg="shim disconnected" id=6f09e3713754243813d9c0717cdb77e8bedfc4c511d31490f7ba369a8d1bcc06 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.204052129Z" level=warning msg="cleaning up after shim disconnected" id=6f09e3713754243813d9c0717cdb77e8bedfc4c511d31490f7ba369a8d1bcc06 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.204312938Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.210829062Z" level=info msg="ignoring event" container=88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.210944966Z" level=info msg="ignoring event" container=6f09e3713754243813d9c0717cdb77e8bedfc4c511d31490f7ba369a8d1bcc06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.210804861Z" level=info msg="shim disconnected" id=88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.211582688Z" level=warning msg="cleaning up after shim disconnected" id=88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.211710392Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.243702093Z" level=info msg="shim disconnected" id=4f7e1db775dc208f8832a04d1b684f4ab8f803d4f68869549effeeb024f11b10 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.244412118Z" level=info msg="ignoring event" container=4f7e1db775dc208f8832a04d1b684f4ab8f803d4f68869549effeeb024f11b10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.248240650Z" level=warning msg="cleaning up after shim disconnected" id=4f7e1db775dc208f8832a04d1b684f4ab8f803d4f68869549effeeb024f11b10 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.248409455Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:47 functional-100700 dockerd[1438]: time="2024-08-07T17:55:47.969145341Z" level=info msg="ignoring event" container=8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:47 functional-100700 dockerd[1444]: time="2024-08-07T17:55:47.970761397Z" level=info msg="shim disconnected" id=8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b namespace=moby
	Aug 07 17:55:47 functional-100700 dockerd[1444]: time="2024-08-07T17:55:47.971219213Z" level=warning msg="cleaning up after shim disconnected" id=8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b namespace=moby
	Aug 07 17:55:47 functional-100700 dockerd[1444]: time="2024-08-07T17:55:47.973093477Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:52 functional-100700 dockerd[1438]: time="2024-08-07T17:55:52.961783600Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65
	Aug 07 17:55:53 functional-100700 dockerd[1444]: time="2024-08-07T17:55:53.004669561Z" level=info msg="shim disconnected" id=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65 namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1444]: time="2024-08-07T17:55:53.004855260Z" level=warning msg="cleaning up after shim disconnected" id=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65 namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1444]: time="2024-08-07T17:55:53.004935360Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.005290559Z" level=info msg="ignoring event" container=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082366606Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082484305Z" level=info msg="Daemon shutdown complete"
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082557905Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082900804Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 17:55:54 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 17:55:54 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 17:55:54 functional-100700 systemd[1]: docker.service: Consumed 6.104s CPU time.
	Aug 07 17:55:54 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:55:54 functional-100700 dockerd[4430]: time="2024-08-07T17:55:54.151963549Z" level=info msg="Starting up"
	Aug 07 17:55:54 functional-100700 dockerd[4430]: time="2024-08-07T17:55:54.153307848Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:55:54 functional-100700 dockerd[4430]: time="2024-08-07T17:55:54.154476946Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=4437
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.189116214Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217487588Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217672488Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217724888Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217743588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217923788Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218099587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218341687Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218462487Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218487087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218500687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218536087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218676087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.221803184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.221937684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222170584Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222318784Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222351783Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222377583Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222707583Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222769383Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222791383Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222824883Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222843583Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.223037183Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.223447282Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.223700782Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224128482Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224153982Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224211182Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224225882Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224249282Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224283382Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224302582Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224321982Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224336982Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224348882Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224375782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224415382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224431982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224446182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224464282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224487782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224502981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224516181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224556381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224572481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224585181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224597481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224611681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224628781Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224653481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224668081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224681281Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225080281Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225134081Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225150081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225165281Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225176081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225190081Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225200981Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225570080Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225664580Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225760880Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225784780Z" level=info msg="containerd successfully booted in 0.038263s"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.203623721Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.249075279Z" level=info msg="Loading containers: start."
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.486283283Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.611181043Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.728226393Z" level=info msg="Loading containers: done."
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.754038026Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.754150926Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.805054292Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:55:55 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.817756408Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526494067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526577168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526591169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526693470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.558712297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.563728963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.563952066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.564587075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.615923059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.616533767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.616742570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.616989273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.649839111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.650906025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.651080528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.651280130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002162008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002309810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002327411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002784217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146319020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146402021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146419521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146546323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186224804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186289605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186312905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186429907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293246071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293400074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293416674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293513875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.342920003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.345412453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.345440953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.346309071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.427619805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.427935011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.427958412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.428175716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450251060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450326762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450344662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450438364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021378960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021447242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021467036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021664985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.032269201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.032481345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.033742514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.034300967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.230710505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.231303050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.231404523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.231887696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:59:53 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.701240101Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.891480424Z" level=info msg="ignoring event" container=011ec8239aa1c51c9f4776f7bfc897d8a3292c8e36986f970b1e2c2ca9da86fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.891549927Z" level=info msg="ignoring event" container=b39dd511076447f842f8327dca684bf0a48d714c0f66e22029b88d889e068b15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892313355Z" level=info msg="shim disconnected" id=b39dd511076447f842f8327dca684bf0a48d714c0f66e22029b88d889e068b15 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892402158Z" level=warning msg="cleaning up after shim disconnected" id=b39dd511076447f842f8327dca684bf0a48d714c0f66e22029b88d889e068b15 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892417259Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892816273Z" level=info msg="shim disconnected" id=011ec8239aa1c51c9f4776f7bfc897d8a3292c8e36986f970b1e2c2ca9da86fd namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.893079383Z" level=warning msg="cleaning up after shim disconnected" id=011ec8239aa1c51c9f4776f7bfc897d8a3292c8e36986f970b1e2c2ca9da86fd namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.893229989Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.943367240Z" level=info msg="ignoring event" container=333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.943759054Z" level=info msg="shim disconnected" id=333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.943822956Z" level=warning msg="cleaning up after shim disconnected" id=333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.943835757Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.963273574Z" level=info msg="shim disconnected" id=ee73743ff4167ab913696e97e9a77ab9a2b23dba89c1348473bb28a7089d45c2 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.963426780Z" level=warning msg="cleaning up after shim disconnected" id=ee73743ff4167ab913696e97e9a77ab9a2b23dba89c1348473bb28a7089d45c2 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.963795094Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.980683817Z" level=info msg="ignoring event" container=ee73743ff4167ab913696e97e9a77ab9a2b23dba89c1348473bb28a7089d45c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.981327041Z" level=info msg="ignoring event" container=d57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.981517248Z" level=info msg="ignoring event" container=ff33a3021f789dbad1bd68bc673700c1dc540ec4a83d74c98c4915f4c66932e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.983088406Z" level=info msg="shim disconnected" id=d57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.983163809Z" level=warning msg="cleaning up after shim disconnected" id=d57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.983176709Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.002058106Z" level=info msg="ignoring event" container=60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.002564025Z" level=info msg="shim disconnected" id=60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.003062843Z" level=warning msg="cleaning up after shim disconnected" id=60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.009295273Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.013796640Z" level=info msg="shim disconnected" id=623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.019740659Z" level=warning msg="cleaning up after shim disconnected" id=623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.019785161Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.008834456Z" level=info msg="shim disconnected" id=ff33a3021f789dbad1bd68bc673700c1dc540ec4a83d74c98c4915f4c66932e7 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.021492824Z" level=warning msg="cleaning up after shim disconnected" id=ff33a3021f789dbad1bd68bc673700c1dc540ec4a83d74c98c4915f4c66932e7 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.021550026Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031232683Z" level=info msg="ignoring event" container=a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031289685Z" level=info msg="ignoring event" container=241a3e17f71d6549f6693cc1faa13d1a82113d0126d862a38ff582ea671e7704 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031323187Z" level=info msg="ignoring event" container=0a22b12bc48412adcefcda1aa5f0176acba34d5ef617a5fc6e81727d4bf2fb9f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031338987Z" level=info msg="ignoring event" container=125a3650c7bb9ceb749c5379c1700ef34f0671035364a39c7f8a55a9e244527f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031357688Z" level=info msg="ignoring event" container=623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.016580642Z" level=info msg="shim disconnected" id=a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.033182755Z" level=info msg="shim disconnected" id=125a3650c7bb9ceb749c5379c1700ef34f0671035364a39c7f8a55a9e244527f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.035991259Z" level=warning msg="cleaning up after shim disconnected" id=125a3650c7bb9ceb749c5379c1700ef34f0671035364a39c7f8a55a9e244527f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.036075162Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.036361773Z" level=warning msg="cleaning up after shim disconnected" id=a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.036396774Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.015009784Z" level=info msg="shim disconnected" id=241a3e17f71d6549f6693cc1faa13d1a82113d0126d862a38ff582ea671e7704 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.040268717Z" level=warning msg="cleaning up after shim disconnected" id=241a3e17f71d6549f6693cc1faa13d1a82113d0126d862a38ff582ea671e7704 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.040483025Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.017838089Z" level=info msg="shim disconnected" id=0a22b12bc48412adcefcda1aa5f0176acba34d5ef617a5fc6e81727d4bf2fb9f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.056017998Z" level=warning msg="cleaning up after shim disconnected" id=0a22b12bc48412adcefcda1aa5f0176acba34d5ef617a5fc6e81727d4bf2fb9f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.056073300Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:58 functional-100700 dockerd[4430]: time="2024-08-07T17:59:58.843549639Z" level=info msg="ignoring event" container=3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:58 functional-100700 dockerd[4437]: time="2024-08-07T17:59:58.844293967Z" level=info msg="shim disconnected" id=3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45 namespace=moby
	Aug 07 17:59:58 functional-100700 dockerd[4437]: time="2024-08-07T17:59:58.844701482Z" level=warning msg="cleaning up after shim disconnected" id=3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45 namespace=moby
	Aug 07 17:59:58 functional-100700 dockerd[4437]: time="2024-08-07T17:59:58.845283503Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 18:00:03 functional-100700 dockerd[4430]: time="2024-08-07T18:00:03.891107278Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077
	Aug 07 18:00:03 functional-100700 dockerd[4430]: time="2024-08-07T18:00:03.954477534Z" level=info msg="ignoring event" container=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 18:00:03 functional-100700 dockerd[4437]: time="2024-08-07T18:00:03.957207011Z" level=info msg="shim disconnected" id=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077 namespace=moby
	Aug 07 18:00:03 functional-100700 dockerd[4437]: time="2024-08-07T18:00:03.957346010Z" level=warning msg="cleaning up after shim disconnected" id=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077 namespace=moby
	Aug 07 18:00:03 functional-100700 dockerd[4437]: time="2024-08-07T18:00:03.957506408Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.021732016Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.022302513Z" level=info msg="Daemon shutdown complete"
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.022522311Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.022549211Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 18:00:05 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 18:00:05 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 18:00:05 functional-100700 systemd[1]: docker.service: Consumed 8.144s CPU time.
	Aug 07 18:00:05 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 18:00:05 functional-100700 dockerd[8031]: time="2024-08-07T18:00:05.087273572Z" level=info msg="Starting up"
	Aug 07 18:01:05 functional-100700 dockerd[8031]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 07 18:01:05 functional-100700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 07 18:01:05 functional-100700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 07 18:01:05 functional-100700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-windows-amd64.exe start -p functional-100700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 90
functional_test.go:757: restart took 2m32.3336864s for "functional-100700" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-100700 -n functional-100700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-100700 -n functional-100700: exit status 2 (12.4140603s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0807 18:01:05.524233    8772 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 logs -n 25
E0807 18:03:20.465116    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 logs -n 25: (2m48.3210529s)
helpers_test.go:252: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| unpause | nospam-974300 --log_dir                                                  | nospam-974300     | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:48 UTC | 07 Aug 24 17:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-974300 --log_dir                                                  | nospam-974300     | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:48 UTC | 07 Aug 24 17:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-974300 --log_dir                                                  | nospam-974300     | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:48 UTC | 07 Aug 24 17:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-974300 --log_dir                                                  | nospam-974300     | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:48 UTC | 07 Aug 24 17:49 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-974300 --log_dir                                                  | nospam-974300     | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:49 UTC | 07 Aug 24 17:49 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-974300 --log_dir                                                  | nospam-974300     | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:49 UTC | 07 Aug 24 17:49 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| delete  | -p nospam-974300                                                         | nospam-974300     | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:49 UTC | 07 Aug 24 17:50 UTC |
	| start   | -p functional-100700                                                     | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:50 UTC | 07 Aug 24 17:54 UTC |
	|         | --memory=4000                                                            |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                               |                   |                   |         |                     |                     |
	| start   | -p functional-100700                                                     | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:54 UTC | 07 Aug 24 17:56 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |                   |         |                     |                     |
	| cache   | functional-100700 cache add                                              | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:56 UTC | 07 Aug 24 17:56 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | functional-100700 cache add                                              | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:56 UTC | 07 Aug 24 17:56 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | functional-100700 cache add                                              | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:56 UTC | 07 Aug 24 17:56 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-100700 cache add                                              | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | minikube-local-cache-test:functional-100700                              |                   |                   |         |                     |                     |
	| cache   | functional-100700 cache delete                                           | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | minikube-local-cache-test:functional-100700                              |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | list                                                                     | minikube          | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	| ssh     | functional-100700 ssh sudo                                               | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | crictl images                                                            |                   |                   |         |                     |                     |
	| ssh     | functional-100700                                                        | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | ssh sudo docker rmi                                                      |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| ssh     | functional-100700 ssh                                                    | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-100700 cache reload                                           | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	| ssh     | functional-100700 ssh                                                    | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-100700 kubectl --                                             | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | --context functional-100700                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-100700                                                     | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:58 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 17:58:33
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 17:58:33.249534    2092 out.go:291] Setting OutFile to fd 728 ...
	I0807 17:58:33.250111    2092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:58:33.250111    2092 out.go:304] Setting ErrFile to fd 800...
	I0807 17:58:33.250179    2092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:58:33.269540    2092 out.go:298] Setting JSON to false
	I0807 17:58:33.272574    2092 start.go:129] hostinfo: {"hostname":"minikube6","uptime":315442,"bootTime":1722738070,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4717 Build 19045.4717","kernelVersion":"10.0.19045.4717 Build 19045.4717","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0807 17:58:33.272574    2092 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 17:58:33.277577    2092 out.go:177] * [functional-100700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	I0807 17:58:33.281028    2092 notify.go:220] Checking for updates...
	I0807 17:58:33.284043    2092 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 17:58:33.286477    2092 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 17:58:33.289594    2092 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0807 17:58:33.292627    2092 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 17:58:33.295302    2092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 17:58:33.298825    2092 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 17:58:33.298825    2092 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 17:58:38.714558    2092 out.go:177] * Using the hyperv driver based on existing profile
	I0807 17:58:38.718761    2092 start.go:297] selected driver: hyperv
	I0807 17:58:38.718761    2092 start.go:901] validating driver "hyperv" against &{Name:functional-100700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-100700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.235.211 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 17:58:38.718761    2092 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 17:58:38.771046    2092 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 17:58:38.771115    2092 cni.go:84] Creating CNI manager for ""
	I0807 17:58:38.771115    2092 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 17:58:38.771254    2092 start.go:340] cluster config:
	{Name:functional-100700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-100700 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.235.211 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 17:58:38.771533    2092 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 17:58:38.776902    2092 out.go:177] * Starting "functional-100700" primary control-plane node in "functional-100700" cluster
	I0807 17:58:38.780865    2092 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 17:58:38.780865    2092 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0807 17:58:38.780865    2092 cache.go:56] Caching tarball of preloaded images
	I0807 17:58:38.780865    2092 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0807 17:58:38.780865    2092 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 17:58:38.781934    2092 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\config.json ...
	I0807 17:58:38.783866    2092 start.go:360] acquireMachinesLock for functional-100700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 17:58:38.783866    2092 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-100700"
	I0807 17:58:38.783866    2092 start.go:96] Skipping create...Using existing machine configuration
	I0807 17:58:38.783866    2092 fix.go:54] fixHost starting: 
	I0807 17:58:38.784885    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:41.603969    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:41.604807    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:41.604807    2092 fix.go:112] recreateIfNeeded on functional-100700: state=Running err=<nil>
	W0807 17:58:41.604807    2092 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 17:58:41.608533    2092 out.go:177] * Updating the running hyperv "functional-100700" VM ...
	I0807 17:58:41.613016    2092 machine.go:94] provisionDockerMachine start ...
	I0807 17:58:41.613016    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:43.839989    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:43.839989    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:43.840252    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:58:46.452954    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:58:46.452954    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:46.459781    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:58:46.460474    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:58:46.460474    2092 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 17:58:46.591805    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-100700
	
	I0807 17:58:46.591805    2092 buildroot.go:166] provisioning hostname "functional-100700"
	I0807 17:58:46.591805    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:48.755211    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:48.755427    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:48.755465    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:58:51.336039    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:58:51.336039    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:51.342623    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:58:51.342623    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:58:51.342623    2092 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-100700 && echo "functional-100700" | sudo tee /etc/hostname
	I0807 17:58:51.496578    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-100700
	
	I0807 17:58:51.496578    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:53.691439    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:53.691439    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:53.691439    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:58:56.301964    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:58:56.301964    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:56.307512    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:58:56.308194    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:58:56.308194    2092 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-100700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-100700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-100700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 17:58:56.438766    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
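The shell snippet run over SSH above makes a three-way decision: leave /etc/hosts alone if the hostname is already mapped, rewrite an existing `127.0.1.1` entry in place (the `sed` branch), or append a fresh entry (the `tee -a` branch). For reference, that logic can be sketched as a pure Go function — a minimal illustration, not minikube's actual code, and `ensureHostsEntry` is a hypothetical name:

```go
package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry mirrors the shell logic above: if no line in /etc/hosts
// already ends with the hostname, rewrite an existing 127.0.1.1 entry in
// place, or append a fresh one when none exists.
func ensureHostsEntry(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) >= 2 && f[len(f)-1] == name {
			return hosts // hostname already mapped; nothing to do
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // the sed-style in-place rewrite
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name // the tee -a append path
}

func main() {
	fmt.Println(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 old-name", "functional-100700"))
}
```

The empty `SSH cmd err, output: <nil>:` line that follows in the log is consistent with the first branch: the entry already existed, so the script printed nothing.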
	I0807 17:58:56.438766    2092 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0807 17:58:56.438900    2092 buildroot.go:174] setting up certificates
	I0807 17:58:56.438900    2092 provision.go:84] configureAuth start
	I0807 17:58:56.438900    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:58.655995    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:58.656961    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:58.657071    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:01.290427    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:01.290427    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:01.290831    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:03.469316    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:03.469316    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:03.469551    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:06.075723    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:06.075723    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:06.075723    2092 provision.go:143] copyHostCerts
	I0807 17:59:06.075723    2092 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0807 17:59:06.075723    2092 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0807 17:59:06.076549    2092 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0807 17:59:06.077992    2092 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0807 17:59:06.077992    2092 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0807 17:59:06.078322    2092 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0807 17:59:06.079146    2092 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0807 17:59:06.079146    2092 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0807 17:59:06.079980    2092 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0807 17:59:06.080688    2092 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-100700 san=[127.0.0.1 172.28.235.211 functional-100700 localhost minikube]
	I0807 17:59:06.262311    2092 provision.go:177] copyRemoteCerts
	I0807 17:59:06.274334    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 17:59:06.274334    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:08.466099    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:08.466421    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:08.466494    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:11.061934    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:11.061934    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:11.061934    2092 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:59:11.172314    2092 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8979173s)
	I0807 17:59:11.172848    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 17:59:11.223362    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0807 17:59:11.271809    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0807 17:59:11.319487    2092 provision.go:87] duration metric: took 14.8803963s to configureAuth
	I0807 17:59:11.319487    2092 buildroot.go:189] setting minikube options for container-runtime
	I0807 17:59:11.320542    2092 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 17:59:11.320588    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:13.493491    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:13.493491    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:13.493879    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:16.077977    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:16.077977    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:16.088668    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:16.088783    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:16.088783    2092 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0807 17:59:16.217785    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0807 17:59:16.217785    2092 buildroot.go:70] root file system type: tmpfs
	I0807 17:59:16.218443    2092 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0807 17:59:16.218443    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:18.421400    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:18.421400    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:18.421838    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:21.023576    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:21.024581    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:21.030466    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:21.031160    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:21.031160    2092 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0807 17:59:21.200213    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0807 17:59:21.200853    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:23.460840    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:23.460840    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:23.461413    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:26.144922    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:26.144922    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:26.151032    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:26.151032    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:26.151032    2092 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0807 17:59:26.288578    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
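The one-liner above (`diff -u ... || { mv ...; daemon-reload; enable; restart; }`) makes the unit update idempotent: when the freshly rendered `docker.service.new` is identical to the installed unit, `diff` exits 0 and nothing after `||` runs, so Docker is not restarted on re-provisioning. The decision can be sketched as (hypothetical helper, not minikube's code):

```go
package main

import "fmt"

// unitUpdateCommands reproduces the decision in the SSH one-liner: identical
// content means diff succeeds and no command runs; any difference moves the
// new unit into place and reloads, enables, and restarts docker.
func unitUpdateCommands(installed, rendered string) []string {
	if installed == rendered {
		return nil // diff exited 0: skip the restart, keeping re-runs cheap
	}
	return []string{
		"sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service",
		"sudo systemctl -f daemon-reload",
		"sudo systemctl -f enable docker",
		"sudo systemctl -f restart docker",
	}
}

func main() {
	fmt.Println(len(unitUpdateCommands("[Unit]\n", "[Unit]\n")), len(unitUpdateCommands("[Unit]\n", "[Unit]\nchanged")))
}
```

The empty command output logged at 17:59:26 is consistent with the no-change branch: the running VM already had this unit installed.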
	I0807 17:59:26.288578    2092 machine.go:97] duration metric: took 44.6749905s to provisionDockerMachine
	I0807 17:59:26.289136    2092 start.go:293] postStartSetup for "functional-100700" (driver="hyperv")
	I0807 17:59:26.289136    2092 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 17:59:26.303659    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 17:59:26.303659    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:28.549324    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:28.550453    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:28.550627    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:31.258516    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:31.258516    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:31.259399    2092 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:59:31.366436    2092 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0626314s)
	I0807 17:59:31.378993    2092 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 17:59:31.386284    2092 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 17:59:31.386284    2092 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0807 17:59:31.386889    2092 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0807 17:59:31.387933    2092 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> 96602.pem in /etc/ssl/certs
	I0807 17:59:31.388907    2092 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9660\hosts -> hosts in /etc/test/nested/copy/9660
	I0807 17:59:31.400272    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9660
	I0807 17:59:31.419285    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /etc/ssl/certs/96602.pem (1708 bytes)
	I0807 17:59:31.469876    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9660\hosts --> /etc/test/nested/copy/9660/hosts (40 bytes)
	I0807 17:59:31.522816    2092 start.go:296] duration metric: took 5.2336131s for postStartSetup
	I0807 17:59:31.522964    2092 fix.go:56] duration metric: took 52.7384232s for fixHost
	I0807 17:59:31.522964    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:33.801714    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:33.801714    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:33.802069    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:36.454493    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:36.455616    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:36.460762    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:36.461590    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:36.461590    2092 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 17:59:36.584817    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723053576.599616702
	
	I0807 17:59:36.584817    2092 fix.go:216] guest clock: 1723053576.599616702
	I0807 17:59:36.584817    2092 fix.go:229] Guest: 2024-08-07 17:59:36.599616702 +0000 UTC Remote: 2024-08-07 17:59:31.5229646 +0000 UTC m=+58.443653901 (delta=5.076652102s)
	I0807 17:59:36.584817    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:38.789376    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:38.789376    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:38.789376    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:41.408403    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:41.408403    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:41.415021    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:41.415132    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:41.415132    2092 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1723053576
	I0807 17:59:41.554262    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Aug  7 17:59:36 UTC 2024
	
	I0807 17:59:41.554342    2092 fix.go:236] clock set: Wed Aug  7 17:59:36 UTC 2024
	 (err=<nil>)
	I0807 17:59:41.554342    2092 start.go:83] releasing machines lock for "functional-100700", held for 1m2.7696728s
	I0807 17:59:41.554664    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:43.743629    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:43.743629    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:43.743690    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:46.402358    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:46.402358    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:46.408259    2092 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0807 17:59:46.408354    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:46.418508    2092 ssh_runner.go:195] Run: cat /version.json
	I0807 17:59:46.418508    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:48.678664    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:48.678664    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:48.678947    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:48.679407    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:48.679407    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:48.679407    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:51.480814    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:51.481012    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:51.481427    2092 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:59:51.507337    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:51.507337    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:51.508062    2092 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:59:51.568433    2092 ssh_runner.go:235] Completed: cat /version.json: (5.149744s)
	I0807 17:59:51.580326    2092 ssh_runner.go:195] Run: systemctl --version
	I0807 17:59:51.588096    2092 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1797709s)
	W0807 17:59:51.588200    2092 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0807 17:59:51.605332    2092 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 17:59:51.614105    2092 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 17:59:51.625622    2092 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 17:59:51.647469    2092 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0807 17:59:51.647469    2092 start.go:495] detecting cgroup driver to use...
	I0807 17:59:51.647469    2092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 17:59:51.698634    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	W0807 17:59:51.702217    2092 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0807 17:59:51.702712    2092 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0807 17:59:51.742110    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0807 17:59:51.763294    2092 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0807 17:59:51.777226    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0807 17:59:51.810899    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 17:59:51.842846    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0807 17:59:51.874465    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 17:59:51.906374    2092 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 17:59:51.940856    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0807 17:59:51.972522    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0807 17:59:52.005392    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0807 17:59:52.039394    2092 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 17:59:52.069956    2092 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 17:59:52.100774    2092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:59:52.376248    2092 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0807 17:59:52.411639    2092 start.go:495] detecting cgroup driver to use...
	I0807 17:59:52.424848    2092 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0807 17:59:52.465672    2092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 17:59:52.507105    2092 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 17:59:52.559294    2092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 17:59:52.602621    2092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 17:59:52.628877    2092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 17:59:52.677947    2092 ssh_runner.go:195] Run: which cri-dockerd
	I0807 17:59:52.696445    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0807 17:59:52.713779    2092 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0807 17:59:52.759506    2092 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0807 17:59:53.063312    2092 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0807 17:59:53.341833    2092 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0807 17:59:53.341833    2092 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0807 17:59:53.390184    2092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:59:53.669002    2092 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 18:01:05.110860    2092 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.440852s)
	I0807 18:01:05.123373    2092 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0807 18:01:05.210998    2092 out.go:177] 
	W0807 18:01:05.214928    2092 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 07 17:52:16 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:52:16 functional-100700 dockerd[674]: time="2024-08-07T17:52:16.458341985Z" level=info msg="Starting up"
	Aug 07 17:52:16 functional-100700 dockerd[674]: time="2024-08-07T17:52:16.459483937Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:52:16 functional-100700 dockerd[674]: time="2024-08-07T17:52:16.460719594Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=680
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.493113277Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523238457Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523275259Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523339562Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523356263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523427766Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523446067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523804083Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523901688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523925089Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523938689Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.524034194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.524376109Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.527352746Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.527600157Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528068478Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528219485Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528416294Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528629904Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.581871643Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.581973248Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582080552Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582105454Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582123954Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582283562Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582887889Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583040296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583147301Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583169102Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583185303Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583200004Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583228805Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583248006Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583336710Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583450015Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583471716Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583486017Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583527319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583544019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583560020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583595322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583708227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583728328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583743029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583769930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583818932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583858834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583890635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583921137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583935837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583972939Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584001540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584017041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584038742Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584125046Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584167548Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584182949Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584202250Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584215250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584231651Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584243051Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584478362Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584610368Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584865080Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.585009287Z" level=info msg="containerd successfully booted in 0.093145s"
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.539100050Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.582623719Z" level=info msg="Loading containers: start."
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.757771440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.984107768Z" level=info msg="Loading containers: done."
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.005649861Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.006524396Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.114597281Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.114734986Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:52:18 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:52:49 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.863488345Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.866317260Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.866749062Z" level=info msg="Daemon shutdown complete"
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.867048363Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.867142864Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 17:52:50 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 17:52:50 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 17:52:50 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:52:50 functional-100700 dockerd[1088]: time="2024-08-07T17:52:50.924908641Z" level=info msg="Starting up"
	Aug 07 17:52:50 functional-100700 dockerd[1088]: time="2024-08-07T17:52:50.926025447Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:52:50 functional-100700 dockerd[1088]: time="2024-08-07T17:52:50.927064452Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1094
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.958194110Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986326653Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986364954Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986401554Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986436154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986479654Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986495054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986720855Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986821556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986844356Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986855856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986880556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.987134958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990330074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990438474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990847676Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990948477Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991014577Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991067378Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991319879Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991378979Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991397879Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991412779Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991428979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991496580Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992185983Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992409884Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992647286Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992672686Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992688386Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992794286Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993149888Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993243389Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993271489Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993292489Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993307089Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993318789Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993338389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993353889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993377989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993393189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993409289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993422089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993433690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993445490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993457890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993471990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993490190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993561890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993582490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993597890Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993619090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993632991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993644091Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993764891Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993878492Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993897692Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993910692Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994016892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994112293Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994155593Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994503995Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994761996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994864197Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.995059098Z" level=info msg="containerd successfully booted in 0.037887s"
	Aug 07 17:52:51 functional-100700 dockerd[1088]: time="2024-08-07T17:52:51.976962789Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.014319979Z" level=info msg="Loading containers: start."
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.153625988Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.280398732Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.383509956Z" level=info msg="Loading containers: done."
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.407173376Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.407304177Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.460329447Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.460475947Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:52:52 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.251394538Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.254204052Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 17:53:01 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.260155282Z" level=info msg="Daemon shutdown complete"
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.260456984Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.260937686Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 17:53:02 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 17:53:02 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 17:53:02 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:53:02 functional-100700 dockerd[1438]: time="2024-08-07T17:53:02.321797079Z" level=info msg="Starting up"
	Aug 07 17:53:02 functional-100700 dockerd[1438]: time="2024-08-07T17:53:02.323692689Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:53:02 functional-100700 dockerd[1438]: time="2024-08-07T17:53:02.324967095Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1444
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.356801457Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391549934Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391620134Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391684835Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391704135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391750835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391768635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391946636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392117037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392142737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392156437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392185437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392310838Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395280253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395439954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395604655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395701255Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395733555Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396053557Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396602860Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396804261Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396886161Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396963461Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397040262Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397105662Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397382264Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397664265Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397760665Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397781966Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397796766Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397810066Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397829466Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397849466Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397864866Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397877866Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397890366Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397902266Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397930866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398086167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398131667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398146767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398159868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398173168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398186368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398199368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398230968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398291368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398305068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398318768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398347068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398379069Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398633370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398774671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398795271Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398837271Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399058072Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399114872Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399134672Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399145573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399188373Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399202473Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399579475Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399779376Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399959877Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.400151978Z" level=info msg="containerd successfully booted in 0.045445s"
	Aug 07 17:53:03 functional-100700 dockerd[1438]: time="2024-08-07T17:53:03.371421015Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.638419724Z" level=info msg="Loading containers: start."
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.762102252Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.878637045Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.979158756Z" level=info msg="Loading containers: done."
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.006779696Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.006939697Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.050899521Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.051782725Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:53:07 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.457114123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.457756947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.457814849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.458624579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.566929844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.567055349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.567075749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.567173853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.642620715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.643094533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.643146835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.647326788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.647829406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.647978912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.648649436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.651222930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.841899112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.842260225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.842357529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.842730042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987249334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987542945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987581346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987882057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.071713287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.072501915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.072657920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.072778524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.120557979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.120838189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.121035196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.121432210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836342825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836494127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836527028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836919632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.031505099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.031971604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.032036705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.032230607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.071740773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.071807874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.071821974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.072043276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.388937110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.389400016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.389566918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.390025223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.011330360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.011407583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.013870604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.017327916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.053458090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.053872712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.054119484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.054800983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064470635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064566761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064584666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064692294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.363127500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.363436282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.363741263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.364157974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:51 functional-100700 dockerd[1438]: time="2024-08-07T17:53:51.247657321Z" level=info msg="ignoring event" container=9d5becd51e7186ec4acc242dcdf88a4327987ec048e0ee52430eac221c2fc0fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.250057633Z" level=info msg="shim disconnected" id=9d5becd51e7186ec4acc242dcdf88a4327987ec048e0ee52430eac221c2fc0fe namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.250177263Z" level=warning msg="cleaning up after shim disconnected" id=9d5becd51e7186ec4acc242dcdf88a4327987ec048e0ee52430eac221c2fc0fe namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.250194468Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1438]: time="2024-08-07T17:53:51.423182591Z" level=info msg="ignoring event" container=2a2add9d4c7dacc6804bc6f6e21cf8821ae80a5ef75964e66784019654fe3bc3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.423885070Z" level=info msg="shim disconnected" id=2a2add9d4c7dacc6804bc6f6e21cf8821ae80a5ef75964e66784019654fe3bc3 namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.424164141Z" level=warning msg="cleaning up after shim disconnected" id=2a2add9d4c7dacc6804bc6f6e21cf8821ae80a5ef75964e66784019654fe3bc3 namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.424226557Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:42 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:55:42 functional-100700 dockerd[1438]: time="2024-08-07T17:55:42.811970533Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.080390772Z" level=info msg="ignoring event" container=f87ac0281bc2649782f39abdff8151e0c016bf26b3182a3df5c8579e21fe65d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.081622815Z" level=info msg="shim disconnected" id=f87ac0281bc2649782f39abdff8151e0c016bf26b3182a3df5c8579e21fe65d7 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.082180634Z" level=warning msg="cleaning up after shim disconnected" id=f87ac0281bc2649782f39abdff8151e0c016bf26b3182a3df5c8579e21fe65d7 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.082393841Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.098416193Z" level=info msg="ignoring event" container=b9283200bae35965a0ee1c5549a6004ce0c7d70f70b9a374e41cd065d4d5dec3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.099709637Z" level=info msg="shim disconnected" id=b9283200bae35965a0ee1c5549a6004ce0c7d70f70b9a374e41cd065d4d5dec3 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.099819341Z" level=warning msg="cleaning up after shim disconnected" id=b9283200bae35965a0ee1c5549a6004ce0c7d70f70b9a374e41cd065d4d5dec3 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.099888243Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.121832799Z" level=info msg="shim disconnected" id=03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.121978304Z" level=warning msg="cleaning up after shim disconnected" id=03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122200511Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122413919Z" level=info msg="shim disconnected" id=1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122477321Z" level=warning msg="cleaning up after shim disconnected" id=1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122491521Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.122750830Z" level=info msg="ignoring event" container=1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.122822433Z" level=info msg="ignoring event" container=03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.132577069Z" level=info msg="shim disconnected" id=9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.132832477Z" level=warning msg="cleaning up after shim disconnected" id=9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.132974882Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.138866285Z" level=info msg="shim disconnected" id=f907706c00ebe4149920447890732b438dd707eb66023282c7b66ac64e2185b5 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.139038791Z" level=warning msg="cleaning up after shim disconnected" id=f907706c00ebe4149920447890732b438dd707eb66023282c7b66ac64e2185b5 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.139187396Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.155113444Z" level=info msg="shim disconnected" id=a334b1535e2f7b257588b80636584105387ba212b623a89972b0a362d62c0504 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.155224248Z" level=warning msg="cleaning up after shim disconnected" id=a334b1535e2f7b257588b80636584105387ba212b623a89972b0a362d62c0504 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.155238049Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.155389654Z" level=info msg="ignoring event" container=f907706c00ebe4149920447890732b438dd707eb66023282c7b66ac64e2185b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.155569360Z" level=info msg="ignoring event" container=9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.155886271Z" level=info msg="ignoring event" container=a334b1535e2f7b257588b80636584105387ba212b623a89972b0a362d62c0504 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.169713847Z" level=info msg="shim disconnected" id=32e3bea2a931545b3b3164d713e35af3f8439f208d9a2909552dc971a61ca84a namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.170314567Z" level=warning msg="cleaning up after shim disconnected" id=32e3bea2a931545b3b3164d713e35af3f8439f208d9a2909552dc971a61ca84a namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.170672780Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.183709228Z" level=info msg="ignoring event" container=32e3bea2a931545b3b3164d713e35af3f8439f208d9a2909552dc971a61ca84a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.186931639Z" level=info msg="ignoring event" container=8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.187130746Z" level=info msg="shim disconnected" id=8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.187455057Z" level=warning msg="cleaning up after shim disconnected" id=8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.187626563Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.203552411Z" level=info msg="shim disconnected" id=6f09e3713754243813d9c0717cdb77e8bedfc4c511d31490f7ba369a8d1bcc06 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.204052129Z" level=warning msg="cleaning up after shim disconnected" id=6f09e3713754243813d9c0717cdb77e8bedfc4c511d31490f7ba369a8d1bcc06 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.204312938Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.210829062Z" level=info msg="ignoring event" container=88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.210944966Z" level=info msg="ignoring event" container=6f09e3713754243813d9c0717cdb77e8bedfc4c511d31490f7ba369a8d1bcc06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.210804861Z" level=info msg="shim disconnected" id=88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.211582688Z" level=warning msg="cleaning up after shim disconnected" id=88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.211710392Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.243702093Z" level=info msg="shim disconnected" id=4f7e1db775dc208f8832a04d1b684f4ab8f803d4f68869549effeeb024f11b10 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.244412118Z" level=info msg="ignoring event" container=4f7e1db775dc208f8832a04d1b684f4ab8f803d4f68869549effeeb024f11b10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.248240650Z" level=warning msg="cleaning up after shim disconnected" id=4f7e1db775dc208f8832a04d1b684f4ab8f803d4f68869549effeeb024f11b10 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.248409455Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:47 functional-100700 dockerd[1438]: time="2024-08-07T17:55:47.969145341Z" level=info msg="ignoring event" container=8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:47 functional-100700 dockerd[1444]: time="2024-08-07T17:55:47.970761397Z" level=info msg="shim disconnected" id=8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b namespace=moby
	Aug 07 17:55:47 functional-100700 dockerd[1444]: time="2024-08-07T17:55:47.971219213Z" level=warning msg="cleaning up after shim disconnected" id=8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b namespace=moby
	Aug 07 17:55:47 functional-100700 dockerd[1444]: time="2024-08-07T17:55:47.973093477Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:52 functional-100700 dockerd[1438]: time="2024-08-07T17:55:52.961783600Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65
	Aug 07 17:55:53 functional-100700 dockerd[1444]: time="2024-08-07T17:55:53.004669561Z" level=info msg="shim disconnected" id=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65 namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1444]: time="2024-08-07T17:55:53.004855260Z" level=warning msg="cleaning up after shim disconnected" id=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65 namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1444]: time="2024-08-07T17:55:53.004935360Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.005290559Z" level=info msg="ignoring event" container=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082366606Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082484305Z" level=info msg="Daemon shutdown complete"
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082557905Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082900804Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 17:55:54 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 17:55:54 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 17:55:54 functional-100700 systemd[1]: docker.service: Consumed 6.104s CPU time.
	Aug 07 17:55:54 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:55:54 functional-100700 dockerd[4430]: time="2024-08-07T17:55:54.151963549Z" level=info msg="Starting up"
	Aug 07 17:55:54 functional-100700 dockerd[4430]: time="2024-08-07T17:55:54.153307848Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:55:54 functional-100700 dockerd[4430]: time="2024-08-07T17:55:54.154476946Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=4437
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.189116214Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217487588Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217672488Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217724888Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217743588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217923788Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218099587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218341687Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218462487Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218487087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218500687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218536087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218676087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.221803184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.221937684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222170584Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222318784Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222351783Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222377583Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222707583Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222769383Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222791383Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222824883Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222843583Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.223037183Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.223447282Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.223700782Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224128482Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224153982Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224211182Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224225882Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224249282Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224283382Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224302582Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224321982Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224336982Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224348882Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224375782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224415382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224431982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224446182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224464282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224487782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224502981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224516181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224556381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224572481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224585181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224597481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224611681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224628781Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224653481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224668081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224681281Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225080281Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225134081Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225150081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225165281Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225176081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225190081Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225200981Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225570080Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225664580Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225760880Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225784780Z" level=info msg="containerd successfully booted in 0.038263s"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.203623721Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.249075279Z" level=info msg="Loading containers: start."
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.486283283Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.611181043Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.728226393Z" level=info msg="Loading containers: done."
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.754038026Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.754150926Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.805054292Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:55:55 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.817756408Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526494067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526577168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526591169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526693470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.558712297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.563728963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.563952066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.564587075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.615923059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.616533767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.616742570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.616989273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.649839111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.650906025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.651080528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.651280130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002162008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002309810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002327411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002784217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146319020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146402021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146419521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146546323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186224804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186289605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186312905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186429907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293246071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293400074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293416674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293513875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.342920003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.345412453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.345440953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.346309071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.427619805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.427935011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.427958412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.428175716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450251060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450326762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450344662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450438364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021378960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021447242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021467036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021664985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.032269201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.032481345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.033742514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.034300967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.230710505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.231303050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.231404523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.231887696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:59:53 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.701240101Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.891480424Z" level=info msg="ignoring event" container=011ec8239aa1c51c9f4776f7bfc897d8a3292c8e36986f970b1e2c2ca9da86fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.891549927Z" level=info msg="ignoring event" container=b39dd511076447f842f8327dca684bf0a48d714c0f66e22029b88d889e068b15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892313355Z" level=info msg="shim disconnected" id=b39dd511076447f842f8327dca684bf0a48d714c0f66e22029b88d889e068b15 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892402158Z" level=warning msg="cleaning up after shim disconnected" id=b39dd511076447f842f8327dca684bf0a48d714c0f66e22029b88d889e068b15 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892417259Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892816273Z" level=info msg="shim disconnected" id=011ec8239aa1c51c9f4776f7bfc897d8a3292c8e36986f970b1e2c2ca9da86fd namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.893079383Z" level=warning msg="cleaning up after shim disconnected" id=011ec8239aa1c51c9f4776f7bfc897d8a3292c8e36986f970b1e2c2ca9da86fd namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.893229989Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.943367240Z" level=info msg="ignoring event" container=333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.943759054Z" level=info msg="shim disconnected" id=333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.943822956Z" level=warning msg="cleaning up after shim disconnected" id=333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.943835757Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.963273574Z" level=info msg="shim disconnected" id=ee73743ff4167ab913696e97e9a77ab9a2b23dba89c1348473bb28a7089d45c2 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.963426780Z" level=warning msg="cleaning up after shim disconnected" id=ee73743ff4167ab913696e97e9a77ab9a2b23dba89c1348473bb28a7089d45c2 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.963795094Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.980683817Z" level=info msg="ignoring event" container=ee73743ff4167ab913696e97e9a77ab9a2b23dba89c1348473bb28a7089d45c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.981327041Z" level=info msg="ignoring event" container=d57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.981517248Z" level=info msg="ignoring event" container=ff33a3021f789dbad1bd68bc673700c1dc540ec4a83d74c98c4915f4c66932e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.983088406Z" level=info msg="shim disconnected" id=d57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.983163809Z" level=warning msg="cleaning up after shim disconnected" id=d57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.983176709Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.002058106Z" level=info msg="ignoring event" container=60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.002564025Z" level=info msg="shim disconnected" id=60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.003062843Z" level=warning msg="cleaning up after shim disconnected" id=60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.009295273Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.013796640Z" level=info msg="shim disconnected" id=623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.019740659Z" level=warning msg="cleaning up after shim disconnected" id=623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.019785161Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.008834456Z" level=info msg="shim disconnected" id=ff33a3021f789dbad1bd68bc673700c1dc540ec4a83d74c98c4915f4c66932e7 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.021492824Z" level=warning msg="cleaning up after shim disconnected" id=ff33a3021f789dbad1bd68bc673700c1dc540ec4a83d74c98c4915f4c66932e7 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.021550026Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031232683Z" level=info msg="ignoring event" container=a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031289685Z" level=info msg="ignoring event" container=241a3e17f71d6549f6693cc1faa13d1a82113d0126d862a38ff582ea671e7704 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031323187Z" level=info msg="ignoring event" container=0a22b12bc48412adcefcda1aa5f0176acba34d5ef617a5fc6e81727d4bf2fb9f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031338987Z" level=info msg="ignoring event" container=125a3650c7bb9ceb749c5379c1700ef34f0671035364a39c7f8a55a9e244527f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031357688Z" level=info msg="ignoring event" container=623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.016580642Z" level=info msg="shim disconnected" id=a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.033182755Z" level=info msg="shim disconnected" id=125a3650c7bb9ceb749c5379c1700ef34f0671035364a39c7f8a55a9e244527f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.035991259Z" level=warning msg="cleaning up after shim disconnected" id=125a3650c7bb9ceb749c5379c1700ef34f0671035364a39c7f8a55a9e244527f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.036075162Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.036361773Z" level=warning msg="cleaning up after shim disconnected" id=a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.036396774Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.015009784Z" level=info msg="shim disconnected" id=241a3e17f71d6549f6693cc1faa13d1a82113d0126d862a38ff582ea671e7704 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.040268717Z" level=warning msg="cleaning up after shim disconnected" id=241a3e17f71d6549f6693cc1faa13d1a82113d0126d862a38ff582ea671e7704 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.040483025Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.017838089Z" level=info msg="shim disconnected" id=0a22b12bc48412adcefcda1aa5f0176acba34d5ef617a5fc6e81727d4bf2fb9f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.056017998Z" level=warning msg="cleaning up after shim disconnected" id=0a22b12bc48412adcefcda1aa5f0176acba34d5ef617a5fc6e81727d4bf2fb9f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.056073300Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:58 functional-100700 dockerd[4430]: time="2024-08-07T17:59:58.843549639Z" level=info msg="ignoring event" container=3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:58 functional-100700 dockerd[4437]: time="2024-08-07T17:59:58.844293967Z" level=info msg="shim disconnected" id=3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45 namespace=moby
	Aug 07 17:59:58 functional-100700 dockerd[4437]: time="2024-08-07T17:59:58.844701482Z" level=warning msg="cleaning up after shim disconnected" id=3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45 namespace=moby
	Aug 07 17:59:58 functional-100700 dockerd[4437]: time="2024-08-07T17:59:58.845283503Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 18:00:03 functional-100700 dockerd[4430]: time="2024-08-07T18:00:03.891107278Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077
	Aug 07 18:00:03 functional-100700 dockerd[4430]: time="2024-08-07T18:00:03.954477534Z" level=info msg="ignoring event" container=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 18:00:03 functional-100700 dockerd[4437]: time="2024-08-07T18:00:03.957207011Z" level=info msg="shim disconnected" id=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077 namespace=moby
	Aug 07 18:00:03 functional-100700 dockerd[4437]: time="2024-08-07T18:00:03.957346010Z" level=warning msg="cleaning up after shim disconnected" id=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077 namespace=moby
	Aug 07 18:00:03 functional-100700 dockerd[4437]: time="2024-08-07T18:00:03.957506408Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.021732016Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.022302513Z" level=info msg="Daemon shutdown complete"
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.022522311Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.022549211Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 18:00:05 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 18:00:05 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 18:00:05 functional-100700 systemd[1]: docker.service: Consumed 8.144s CPU time.
	Aug 07 18:00:05 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 18:00:05 functional-100700 dockerd[8031]: time="2024-08-07T18:00:05.087273572Z" level=info msg="Starting up"
	Aug 07 18:01:05 functional-100700 dockerd[8031]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 07 18:01:05 functional-100700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 07 18:01:05 functional-100700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 07 18:01:05 functional-100700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0807 18:01:05.216074    2092 out.go:239] * 
	W0807 18:01:05.217856    2092 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 18:01:05.222151    2092 out.go:177] 
	
	
	==> Docker <==
	Aug 07 18:03:05 functional-100700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 07 18:03:05 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:03:05Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf'"
	Aug 07 18:03:05 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:03:05Z" level=error msg="error getting RW layer size for container ID '3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:03:05 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:03:05Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45'"
	Aug 07 18:03:05 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:03:05Z" level=error msg="error getting RW layer size for container ID '03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:03:05 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:03:05Z" level=error msg="Set backoffDuration to : 1m0s for container ID '03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e'"
	Aug 07 18:03:05 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:03:05Z" level=error msg="error getting RW layer size for container ID 'a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:03:05 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:03:05Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76'"
	Aug 07 18:03:05 functional-100700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 07 18:03:05 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:03:05Z" level=error msg="error getting RW layer size for container ID '60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:03:05 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:03:05Z" level=error msg="Set backoffDuration to : 1m0s for container ID '60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804'"
	Aug 07 18:03:05 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:03:05Z" level=error msg="error getting RW layer size for container ID '8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:03:05 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:03:05Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b'"
	Aug 07 18:03:05 functional-100700 systemd[1]: Failed to start Docker Application Container Engine.
	Aug 07 18:03:05 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:03:05Z" level=error msg="error getting RW layer size for container ID '623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:03:05 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:03:05Z" level=error msg="error getting RW layer size for container ID 'd57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/d57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:03:05 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:03:05Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b'"
	Aug 07 18:03:05 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:03:05Z" level=error msg="Set backoffDuration to : 1m0s for container ID '623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c'"
	Aug 07 18:03:05 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:03:05Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Aug 07 18:03:05 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:03:05Z" level=error msg="error getting RW layer size for container ID '76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:03:05 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:03:05Z" level=error msg="Set backoffDuration to : 1m0s for container ID '76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65'"
	Aug 07 18:03:05 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:03:05Z" level=error msg="error getting RW layer size for container ID '88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:03:05 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:03:05Z" level=error msg="Set backoffDuration to : 1m0s for container ID '88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9'"
	Aug 07 18:03:05 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:03:05Z" level=error msg="error getting RW layer size for container ID '1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:03:05 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:03:05Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4'"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-07T18:03:05Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +15.852357] systemd-fstab-generator[2536]: Ignoring "noauto" option for root device
	[  +0.197291] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.405097] kauditd_printk_skb: 88 callbacks suppressed
	[Aug 7 17:54] kauditd_printk_skb: 10 callbacks suppressed
	[Aug 7 17:55] systemd-fstab-generator[3950]: Ignoring "noauto" option for root device
	[  +0.649998] systemd-fstab-generator[3985]: Ignoring "noauto" option for root device
	[  +0.264046] systemd-fstab-generator[3997]: Ignoring "noauto" option for root device
	[  +0.315437] systemd-fstab-generator[4011]: Ignoring "noauto" option for root device
	[  +5.331377] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.070413] systemd-fstab-generator[4649]: Ignoring "noauto" option for root device
	[  +0.207492] systemd-fstab-generator[4660]: Ignoring "noauto" option for root device
	[  +0.210288] systemd-fstab-generator[4672]: Ignoring "noauto" option for root device
	[  +0.283847] systemd-fstab-generator[4687]: Ignoring "noauto" option for root device
	[  +0.989188] systemd-fstab-generator[4863]: Ignoring "noauto" option for root device
	[Aug 7 17:56] systemd-fstab-generator[4990]: Ignoring "noauto" option for root device
	[  +0.108824] kauditd_printk_skb: 137 callbacks suppressed
	[  +6.503684] kauditd_printk_skb: 52 callbacks suppressed
	[ +11.988991] systemd-fstab-generator[5884]: Ignoring "noauto" option for root device
	[  +0.145733] kauditd_printk_skb: 31 callbacks suppressed
	[Aug 7 17:59] systemd-fstab-generator[7536]: Ignoring "noauto" option for root device
	[  +0.151220] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.513441] systemd-fstab-generator[7585]: Ignoring "noauto" option for root device
	[  +0.282699] systemd-fstab-generator[7598]: Ignoring "noauto" option for root device
	[  +0.340219] systemd-fstab-generator[7612]: Ignoring "noauto" option for root device
	[  +5.344642] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 18:04:06 up 12 min,  0 users,  load average: 0.00, 0.16, 0.17
	Linux functional-100700 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 07 18:03:59 functional-100700 kubelet[4998]: E0807 18:03:59.603839    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:03:59 functional-100700 kubelet[4998]: E0807 18:03:59.603974    4998 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Aug 07 18:04:00 functional-100700 kubelet[4998]: E0807 18:04:00.639125    4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused" interval="7s"
	Aug 07 18:04:01 functional-100700 kubelet[4998]: E0807 18:04:01.750630    4998 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 18:04:01 functional-100700 kubelet[4998]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 18:04:01 functional-100700 kubelet[4998]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:04:01 functional-100700 kubelet[4998]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:04:01 functional-100700 kubelet[4998]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 18:04:04 functional-100700 kubelet[4998]: E0807 18:04:04.074571    4998 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m10.817519815s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Aug 07 18:04:05 functional-100700 kubelet[4998]: E0807 18:04:05.862150    4998 kubelet.go:2919] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:04:05 functional-100700 kubelet[4998]: E0807 18:04:05.862412    4998 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Aug 07 18:04:05 functional-100700 kubelet[4998]: E0807 18:04:05.862465    4998 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:04:05 functional-100700 kubelet[4998]: E0807 18:04:05.862501    4998 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:04:05 functional-100700 kubelet[4998]: E0807 18:04:05.862553    4998 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 07 18:04:05 functional-100700 kubelet[4998]: E0807 18:04:05.862584    4998 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:04:05 functional-100700 kubelet[4998]: E0807 18:04:05.862898    4998 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 07 18:04:05 functional-100700 kubelet[4998]: E0807 18:04:05.863371    4998 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:04:05 functional-100700 kubelet[4998]: E0807 18:04:05.864423    4998 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:04:05 functional-100700 kubelet[4998]: E0807 18:04:05.864485    4998 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:04:05 functional-100700 kubelet[4998]: E0807 18:04:05.864535    4998 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 07 18:04:05 functional-100700 kubelet[4998]: E0807 18:04:05.864558    4998 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 07 18:04:05 functional-100700 kubelet[4998]: E0807 18:04:05.865276    4998 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Aug 07 18:04:05 functional-100700 kubelet[4998]: E0807 18:04:05.865421    4998 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Aug 07 18:04:05 functional-100700 kubelet[4998]: E0807 18:04:05.865512    4998 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:04:05 functional-100700 kubelet[4998]: I0807 18:04:05.865529    4998 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	

-- /stdout --
** stderr ** 
	W0807 18:01:17.910648   10488 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0807 18:02:05.372651   10488 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:02:05.407209   10488 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:02:05.437370   10488 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:02:05.469937   10488 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:02:05.500403   10488 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:02:05.530506   10488 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:02:05.561498   10488 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:03:05.646884   10488 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-100700 -n functional-100700
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-100700 -n functional-100700: exit status 2 (12.2447816s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0807 18:04:06.667282    8976 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-100700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (345.73s)

TestFunctional/serial/ComponentHealth (180.41s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-100700 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-100700 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (2.1798188s)

-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-100700 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-100700 -n functional-100700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-100700 -n functional-100700: exit status 2 (11.9382376s)

-- stdout --
	Running

                                                
** stderr ** 
	W0807 18:04:21.101192    2900 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 logs -n 25
E0807 18:04:43.629643    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 logs -n 25: (2m33.9184685s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| unpause | nospam-974300 --log_dir                                                  | nospam-974300     | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:48 UTC | 07 Aug 24 17:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-974300 --log_dir                                                  | nospam-974300     | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:48 UTC | 07 Aug 24 17:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-974300 --log_dir                                                  | nospam-974300     | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:48 UTC | 07 Aug 24 17:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-974300 --log_dir                                                  | nospam-974300     | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:48 UTC | 07 Aug 24 17:49 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-974300 --log_dir                                                  | nospam-974300     | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:49 UTC | 07 Aug 24 17:49 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-974300 --log_dir                                                  | nospam-974300     | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:49 UTC | 07 Aug 24 17:49 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| delete  | -p nospam-974300                                                         | nospam-974300     | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:49 UTC | 07 Aug 24 17:50 UTC |
	| start   | -p functional-100700                                                     | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:50 UTC | 07 Aug 24 17:54 UTC |
	|         | --memory=4000                                                            |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                               |                   |                   |         |                     |                     |
	| start   | -p functional-100700                                                     | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:54 UTC | 07 Aug 24 17:56 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |                   |         |                     |                     |
	| cache   | functional-100700 cache add                                              | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:56 UTC | 07 Aug 24 17:56 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | functional-100700 cache add                                              | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:56 UTC | 07 Aug 24 17:56 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | functional-100700 cache add                                              | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:56 UTC | 07 Aug 24 17:56 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-100700 cache add                                              | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | minikube-local-cache-test:functional-100700                              |                   |                   |         |                     |                     |
	| cache   | functional-100700 cache delete                                           | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | minikube-local-cache-test:functional-100700                              |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | list                                                                     | minikube          | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	| ssh     | functional-100700 ssh sudo                                               | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | crictl images                                                            |                   |                   |         |                     |                     |
	| ssh     | functional-100700                                                        | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | ssh sudo docker rmi                                                      |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| ssh     | functional-100700 ssh                                                    | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-100700 cache reload                                           | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	| ssh     | functional-100700 ssh                                                    | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-100700 kubectl --                                             | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|         | --context functional-100700                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-100700                                                     | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:58 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 17:58:33
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 17:58:33.249534    2092 out.go:291] Setting OutFile to fd 728 ...
	I0807 17:58:33.250111    2092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:58:33.250111    2092 out.go:304] Setting ErrFile to fd 800...
	I0807 17:58:33.250179    2092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:58:33.269540    2092 out.go:298] Setting JSON to false
	I0807 17:58:33.272574    2092 start.go:129] hostinfo: {"hostname":"minikube6","uptime":315442,"bootTime":1722738070,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4717 Build 19045.4717","kernelVersion":"10.0.19045.4717 Build 19045.4717","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0807 17:58:33.272574    2092 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 17:58:33.277577    2092 out.go:177] * [functional-100700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	I0807 17:58:33.281028    2092 notify.go:220] Checking for updates...
	I0807 17:58:33.284043    2092 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 17:58:33.286477    2092 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 17:58:33.289594    2092 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0807 17:58:33.292627    2092 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 17:58:33.295302    2092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 17:58:33.298825    2092 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 17:58:33.298825    2092 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 17:58:38.714558    2092 out.go:177] * Using the hyperv driver based on existing profile
	I0807 17:58:38.718761    2092 start.go:297] selected driver: hyperv
	I0807 17:58:38.718761    2092 start.go:901] validating driver "hyperv" against &{Name:functional-100700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-100700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.235.211 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 17:58:38.718761    2092 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 17:58:38.771046    2092 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 17:58:38.771115    2092 cni.go:84] Creating CNI manager for ""
	I0807 17:58:38.771115    2092 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 17:58:38.771254    2092 start.go:340] cluster config:
	{Name:functional-100700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-100700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.235.211 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 17:58:38.771533    2092 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 17:58:38.776902    2092 out.go:177] * Starting "functional-100700" primary control-plane node in "functional-100700" cluster
	I0807 17:58:38.780865    2092 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 17:58:38.780865    2092 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0807 17:58:38.780865    2092 cache.go:56] Caching tarball of preloaded images
	I0807 17:58:38.780865    2092 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0807 17:58:38.780865    2092 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 17:58:38.781934    2092 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\config.json ...
	I0807 17:58:38.783866    2092 start.go:360] acquireMachinesLock for functional-100700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 17:58:38.783866    2092 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-100700"
	I0807 17:58:38.783866    2092 start.go:96] Skipping create...Using existing machine configuration
	I0807 17:58:38.783866    2092 fix.go:54] fixHost starting: 
	I0807 17:58:38.784885    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:41.603969    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:41.604807    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:41.604807    2092 fix.go:112] recreateIfNeeded on functional-100700: state=Running err=<nil>
	W0807 17:58:41.604807    2092 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 17:58:41.608533    2092 out.go:177] * Updating the running hyperv "functional-100700" VM ...
	I0807 17:58:41.613016    2092 machine.go:94] provisionDockerMachine start ...
	I0807 17:58:41.613016    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:43.839989    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:43.839989    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:43.840252    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:58:46.452954    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:58:46.452954    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:46.459781    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:58:46.460474    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:58:46.460474    2092 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 17:58:46.591805    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-100700
	
	I0807 17:58:46.591805    2092 buildroot.go:166] provisioning hostname "functional-100700"
	I0807 17:58:46.591805    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:48.755211    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:48.755427    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:48.755465    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:58:51.336039    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:58:51.336039    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:51.342623    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:58:51.342623    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:58:51.342623    2092 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-100700 && echo "functional-100700" | sudo tee /etc/hostname
	I0807 17:58:51.496578    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-100700
	
	I0807 17:58:51.496578    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:53.691439    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:53.691439    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:53.691439    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:58:56.301964    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:58:56.301964    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:56.307512    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:58:56.308194    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:58:56.308194    2092 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-100700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-100700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-100700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 17:58:56.438766    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 17:58:56.438766    2092 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0807 17:58:56.438900    2092 buildroot.go:174] setting up certificates
	I0807 17:58:56.438900    2092 provision.go:84] configureAuth start
	I0807 17:58:56.438900    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:58.655995    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:58.656961    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:58.657071    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:01.290427    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:01.290427    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:01.290831    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:03.469316    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:03.469316    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:03.469551    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:06.075723    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:06.075723    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:06.075723    2092 provision.go:143] copyHostCerts
	I0807 17:59:06.075723    2092 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0807 17:59:06.075723    2092 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0807 17:59:06.076549    2092 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0807 17:59:06.077992    2092 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0807 17:59:06.077992    2092 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0807 17:59:06.078322    2092 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0807 17:59:06.079146    2092 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0807 17:59:06.079146    2092 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0807 17:59:06.079980    2092 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0807 17:59:06.080688    2092 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-100700 san=[127.0.0.1 172.28.235.211 functional-100700 localhost minikube]
	I0807 17:59:06.262311    2092 provision.go:177] copyRemoteCerts
	I0807 17:59:06.274334    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 17:59:06.274334    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:08.466099    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:08.466421    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:08.466494    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:11.061934    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:11.061934    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:11.061934    2092 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:59:11.172314    2092 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8979173s)
	I0807 17:59:11.172848    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 17:59:11.223362    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0807 17:59:11.271809    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0807 17:59:11.319487    2092 provision.go:87] duration metric: took 14.8803963s to configureAuth
	I0807 17:59:11.319487    2092 buildroot.go:189] setting minikube options for container-runtime
	I0807 17:59:11.320542    2092 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 17:59:11.320588    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:13.493491    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:13.493491    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:13.493879    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:16.077977    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:16.077977    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:16.088668    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:16.088783    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:16.088783    2092 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0807 17:59:16.217785    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0807 17:59:16.217785    2092 buildroot.go:70] root file system type: tmpfs
	I0807 17:59:16.218443    2092 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0807 17:59:16.218443    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:18.421400    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:18.421400    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:18.421838    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:21.023576    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:21.024581    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:21.030466    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:21.031160    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:21.031160    2092 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0807 17:59:21.200213    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0807 17:59:21.200853    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:23.460840    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:23.460840    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:23.461413    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:26.144922    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:26.144922    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:26.151032    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:26.151032    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:26.151032    2092 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0807 17:59:26.288578    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 17:59:26.288578    2092 machine.go:97] duration metric: took 44.6749905s to provisionDockerMachine
	I0807 17:59:26.289136    2092 start.go:293] postStartSetup for "functional-100700" (driver="hyperv")
	I0807 17:59:26.289136    2092 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 17:59:26.303659    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 17:59:26.303659    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:28.549324    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:28.550453    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:28.550627    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:31.258516    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:31.258516    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:31.259399    2092 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:59:31.366436    2092 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0626314s)
	I0807 17:59:31.378993    2092 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 17:59:31.386284    2092 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 17:59:31.386284    2092 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0807 17:59:31.386889    2092 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0807 17:59:31.387933    2092 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> 96602.pem in /etc/ssl/certs
	I0807 17:59:31.388907    2092 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9660\hosts -> hosts in /etc/test/nested/copy/9660
	I0807 17:59:31.400272    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9660
	I0807 17:59:31.419285    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /etc/ssl/certs/96602.pem (1708 bytes)
	I0807 17:59:31.469876    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9660\hosts --> /etc/test/nested/copy/9660/hosts (40 bytes)
	I0807 17:59:31.522816    2092 start.go:296] duration metric: took 5.2336131s for postStartSetup
	I0807 17:59:31.522964    2092 fix.go:56] duration metric: took 52.7384232s for fixHost
	I0807 17:59:31.522964    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:33.801714    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:33.801714    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:33.802069    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:36.454493    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:36.455616    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:36.460762    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:36.461590    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:36.461590    2092 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 17:59:36.584817    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723053576.599616702
	
	I0807 17:59:36.584817    2092 fix.go:216] guest clock: 1723053576.599616702
	I0807 17:59:36.584817    2092 fix.go:229] Guest: 2024-08-07 17:59:36.599616702 +0000 UTC Remote: 2024-08-07 17:59:31.5229646 +0000 UTC m=+58.443653901 (delta=5.076652102s)
	I0807 17:59:36.584817    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:38.789376    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:38.789376    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:38.789376    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:41.408403    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:41.408403    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:41.415021    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:41.415132    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:41.415132    2092 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1723053576
	I0807 17:59:41.554262    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Aug  7 17:59:36 UTC 2024
	
	I0807 17:59:41.554342    2092 fix.go:236] clock set: Wed Aug  7 17:59:36 UTC 2024
	 (err=<nil>)
	I0807 17:59:41.554342    2092 start.go:83] releasing machines lock for "functional-100700", held for 1m2.7696728s
	I0807 17:59:41.554664    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:43.743629    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:43.743629    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:43.743690    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:46.402358    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:46.402358    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:46.408259    2092 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0807 17:59:46.408354    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:46.418508    2092 ssh_runner.go:195] Run: cat /version.json
	I0807 17:59:46.418508    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:48.678664    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:48.678664    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:48.678947    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:48.679407    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:48.679407    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:48.679407    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:51.480814    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:51.481012    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:51.481427    2092 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:59:51.507337    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:51.507337    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:51.508062    2092 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:59:51.568433    2092 ssh_runner.go:235] Completed: cat /version.json: (5.149744s)
	I0807 17:59:51.580326    2092 ssh_runner.go:195] Run: systemctl --version
	I0807 17:59:51.588096    2092 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1797709s)
	W0807 17:59:51.588200    2092 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0807 17:59:51.605332    2092 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 17:59:51.614105    2092 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 17:59:51.625622    2092 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 17:59:51.647469    2092 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0807 17:59:51.647469    2092 start.go:495] detecting cgroup driver to use...
	I0807 17:59:51.647469    2092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 17:59:51.698634    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	W0807 17:59:51.702217    2092 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0807 17:59:51.702712    2092 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0807 17:59:51.742110    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0807 17:59:51.763294    2092 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0807 17:59:51.777226    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0807 17:59:51.810899    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 17:59:51.842846    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0807 17:59:51.874465    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 17:59:51.906374    2092 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 17:59:51.940856    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0807 17:59:51.972522    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0807 17:59:52.005392    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0807 17:59:52.039394    2092 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 17:59:52.069956    2092 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 17:59:52.100774    2092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:59:52.376248    2092 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0807 17:59:52.411639    2092 start.go:495] detecting cgroup driver to use...
	I0807 17:59:52.424848    2092 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0807 17:59:52.465672    2092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 17:59:52.507105    2092 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 17:59:52.559294    2092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 17:59:52.602621    2092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 17:59:52.628877    2092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 17:59:52.677947    2092 ssh_runner.go:195] Run: which cri-dockerd
	I0807 17:59:52.696445    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0807 17:59:52.713779    2092 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0807 17:59:52.759506    2092 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0807 17:59:53.063312    2092 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0807 17:59:53.341833    2092 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0807 17:59:53.341833    2092 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0807 17:59:53.390184    2092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:59:53.669002    2092 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 18:01:05.110860    2092 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.440852s)
	I0807 18:01:05.123373    2092 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0807 18:01:05.210998    2092 out.go:177] 
	W0807 18:01:05.214928    2092 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 07 17:52:16 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:52:16 functional-100700 dockerd[674]: time="2024-08-07T17:52:16.458341985Z" level=info msg="Starting up"
	Aug 07 17:52:16 functional-100700 dockerd[674]: time="2024-08-07T17:52:16.459483937Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:52:16 functional-100700 dockerd[674]: time="2024-08-07T17:52:16.460719594Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=680
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.493113277Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523238457Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523275259Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523339562Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523356263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523427766Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523446067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523804083Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523901688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523925089Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523938689Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.524034194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.524376109Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.527352746Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.527600157Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528068478Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528219485Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528416294Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528629904Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.581871643Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.581973248Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582080552Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582105454Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582123954Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582283562Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582887889Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583040296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583147301Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583169102Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583185303Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583200004Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583228805Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583248006Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583336710Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583450015Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583471716Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583486017Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583527319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583544019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583560020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583595322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583708227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583728328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583743029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583769930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583818932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583858834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583890635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583921137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583935837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583972939Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584001540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584017041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584038742Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584125046Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584167548Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584182949Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584202250Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584215250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584231651Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584243051Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584478362Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584610368Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584865080Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.585009287Z" level=info msg="containerd successfully booted in 0.093145s"
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.539100050Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.582623719Z" level=info msg="Loading containers: start."
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.757771440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.984107768Z" level=info msg="Loading containers: done."
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.005649861Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.006524396Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.114597281Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.114734986Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:52:18 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:52:49 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.863488345Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.866317260Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.866749062Z" level=info msg="Daemon shutdown complete"
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.867048363Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.867142864Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 17:52:50 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 17:52:50 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 17:52:50 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:52:50 functional-100700 dockerd[1088]: time="2024-08-07T17:52:50.924908641Z" level=info msg="Starting up"
	Aug 07 17:52:50 functional-100700 dockerd[1088]: time="2024-08-07T17:52:50.926025447Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:52:50 functional-100700 dockerd[1088]: time="2024-08-07T17:52:50.927064452Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1094
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.958194110Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986326653Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986364954Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986401554Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986436154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986479654Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986495054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986720855Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986821556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986844356Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986855856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986880556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.987134958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990330074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990438474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990847676Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990948477Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991014577Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991067378Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991319879Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991378979Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991397879Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991412779Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991428979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991496580Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992185983Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992409884Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992647286Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992672686Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992688386Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992794286Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993149888Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993243389Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993271489Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993292489Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993307089Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993318789Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993338389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993353889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993377989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993393189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993409289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993422089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993433690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993445490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993457890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993471990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993490190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993561890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993582490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993597890Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993619090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993632991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993644091Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993764891Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993878492Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993897692Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993910692Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994016892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994112293Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994155593Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994503995Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994761996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994864197Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.995059098Z" level=info msg="containerd successfully booted in 0.037887s"
	Aug 07 17:52:51 functional-100700 dockerd[1088]: time="2024-08-07T17:52:51.976962789Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.014319979Z" level=info msg="Loading containers: start."
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.153625988Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.280398732Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.383509956Z" level=info msg="Loading containers: done."
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.407173376Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.407304177Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.460329447Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.460475947Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:52:52 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.251394538Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.254204052Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 17:53:01 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.260155282Z" level=info msg="Daemon shutdown complete"
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.260456984Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.260937686Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 17:53:02 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 17:53:02 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 17:53:02 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:53:02 functional-100700 dockerd[1438]: time="2024-08-07T17:53:02.321797079Z" level=info msg="Starting up"
	Aug 07 17:53:02 functional-100700 dockerd[1438]: time="2024-08-07T17:53:02.323692689Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:53:02 functional-100700 dockerd[1438]: time="2024-08-07T17:53:02.324967095Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1444
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.356801457Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391549934Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391620134Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391684835Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391704135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391750835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391768635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391946636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392117037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392142737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392156437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392185437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392310838Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395280253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395439954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395604655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395701255Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395733555Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396053557Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396602860Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396804261Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396886161Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396963461Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397040262Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397105662Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397382264Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397664265Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397760665Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397781966Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397796766Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397810066Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397829466Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397849466Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397864866Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397877866Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397890366Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397902266Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397930866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398086167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398131667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398146767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398159868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398173168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398186368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398199368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398230968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398291368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398305068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398318768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398347068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398379069Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398633370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398774671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398795271Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398837271Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399058072Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399114872Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399134672Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399145573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399188373Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399202473Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399579475Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399779376Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399959877Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.400151978Z" level=info msg="containerd successfully booted in 0.045445s"
	Aug 07 17:53:03 functional-100700 dockerd[1438]: time="2024-08-07T17:53:03.371421015Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.638419724Z" level=info msg="Loading containers: start."
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.762102252Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.878637045Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.979158756Z" level=info msg="Loading containers: done."
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.006779696Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.006939697Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.050899521Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.051782725Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:53:07 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.457114123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.457756947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.457814849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.458624579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.566929844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.567055349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.567075749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.567173853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.642620715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.643094533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.643146835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.647326788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.647829406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.647978912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.648649436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.651222930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.841899112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.842260225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.842357529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.842730042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987249334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987542945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987581346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987882057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.071713287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.072501915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.072657920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.072778524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.120557979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.120838189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.121035196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.121432210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836342825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836494127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836527028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836919632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.031505099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.031971604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.032036705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.032230607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.071740773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.071807874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.071821974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.072043276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.388937110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.389400016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.389566918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.390025223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.011330360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.011407583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.013870604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.017327916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.053458090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.053872712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.054119484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.054800983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064470635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064566761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064584666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064692294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.363127500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.363436282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.363741263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.364157974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:51 functional-100700 dockerd[1438]: time="2024-08-07T17:53:51.247657321Z" level=info msg="ignoring event" container=9d5becd51e7186ec4acc242dcdf88a4327987ec048e0ee52430eac221c2fc0fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.250057633Z" level=info msg="shim disconnected" id=9d5becd51e7186ec4acc242dcdf88a4327987ec048e0ee52430eac221c2fc0fe namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.250177263Z" level=warning msg="cleaning up after shim disconnected" id=9d5becd51e7186ec4acc242dcdf88a4327987ec048e0ee52430eac221c2fc0fe namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.250194468Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1438]: time="2024-08-07T17:53:51.423182591Z" level=info msg="ignoring event" container=2a2add9d4c7dacc6804bc6f6e21cf8821ae80a5ef75964e66784019654fe3bc3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.423885070Z" level=info msg="shim disconnected" id=2a2add9d4c7dacc6804bc6f6e21cf8821ae80a5ef75964e66784019654fe3bc3 namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.424164141Z" level=warning msg="cleaning up after shim disconnected" id=2a2add9d4c7dacc6804bc6f6e21cf8821ae80a5ef75964e66784019654fe3bc3 namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.424226557Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:42 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:55:42 functional-100700 dockerd[1438]: time="2024-08-07T17:55:42.811970533Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.080390772Z" level=info msg="ignoring event" container=f87ac0281bc2649782f39abdff8151e0c016bf26b3182a3df5c8579e21fe65d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.081622815Z" level=info msg="shim disconnected" id=f87ac0281bc2649782f39abdff8151e0c016bf26b3182a3df5c8579e21fe65d7 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.082180634Z" level=warning msg="cleaning up after shim disconnected" id=f87ac0281bc2649782f39abdff8151e0c016bf26b3182a3df5c8579e21fe65d7 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.082393841Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.098416193Z" level=info msg="ignoring event" container=b9283200bae35965a0ee1c5549a6004ce0c7d70f70b9a374e41cd065d4d5dec3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.099709637Z" level=info msg="shim disconnected" id=b9283200bae35965a0ee1c5549a6004ce0c7d70f70b9a374e41cd065d4d5dec3 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.099819341Z" level=warning msg="cleaning up after shim disconnected" id=b9283200bae35965a0ee1c5549a6004ce0c7d70f70b9a374e41cd065d4d5dec3 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.099888243Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.121832799Z" level=info msg="shim disconnected" id=03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.121978304Z" level=warning msg="cleaning up after shim disconnected" id=03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122200511Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122413919Z" level=info msg="shim disconnected" id=1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122477321Z" level=warning msg="cleaning up after shim disconnected" id=1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122491521Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.122750830Z" level=info msg="ignoring event" container=1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.122822433Z" level=info msg="ignoring event" container=03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.132577069Z" level=info msg="shim disconnected" id=9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.132832477Z" level=warning msg="cleaning up after shim disconnected" id=9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.132974882Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.138866285Z" level=info msg="shim disconnected" id=f907706c00ebe4149920447890732b438dd707eb66023282c7b66ac64e2185b5 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.139038791Z" level=warning msg="cleaning up after shim disconnected" id=f907706c00ebe4149920447890732b438dd707eb66023282c7b66ac64e2185b5 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.139187396Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.155113444Z" level=info msg="shim disconnected" id=a334b1535e2f7b257588b80636584105387ba212b623a89972b0a362d62c0504 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.155224248Z" level=warning msg="cleaning up after shim disconnected" id=a334b1535e2f7b257588b80636584105387ba212b623a89972b0a362d62c0504 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.155238049Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.155389654Z" level=info msg="ignoring event" container=f907706c00ebe4149920447890732b438dd707eb66023282c7b66ac64e2185b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.155569360Z" level=info msg="ignoring event" container=9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.155886271Z" level=info msg="ignoring event" container=a334b1535e2f7b257588b80636584105387ba212b623a89972b0a362d62c0504 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.169713847Z" level=info msg="shim disconnected" id=32e3bea2a931545b3b3164d713e35af3f8439f208d9a2909552dc971a61ca84a namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.170314567Z" level=warning msg="cleaning up after shim disconnected" id=32e3bea2a931545b3b3164d713e35af3f8439f208d9a2909552dc971a61ca84a namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.170672780Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.183709228Z" level=info msg="ignoring event" container=32e3bea2a931545b3b3164d713e35af3f8439f208d9a2909552dc971a61ca84a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.186931639Z" level=info msg="ignoring event" container=8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.187130746Z" level=info msg="shim disconnected" id=8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.187455057Z" level=warning msg="cleaning up after shim disconnected" id=8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.187626563Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.203552411Z" level=info msg="shim disconnected" id=6f09e3713754243813d9c0717cdb77e8bedfc4c511d31490f7ba369a8d1bcc06 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.204052129Z" level=warning msg="cleaning up after shim disconnected" id=6f09e3713754243813d9c0717cdb77e8bedfc4c511d31490f7ba369a8d1bcc06 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.204312938Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.210829062Z" level=info msg="ignoring event" container=88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.210944966Z" level=info msg="ignoring event" container=6f09e3713754243813d9c0717cdb77e8bedfc4c511d31490f7ba369a8d1bcc06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.210804861Z" level=info msg="shim disconnected" id=88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.211582688Z" level=warning msg="cleaning up after shim disconnected" id=88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.211710392Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.243702093Z" level=info msg="shim disconnected" id=4f7e1db775dc208f8832a04d1b684f4ab8f803d4f68869549effeeb024f11b10 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.244412118Z" level=info msg="ignoring event" container=4f7e1db775dc208f8832a04d1b684f4ab8f803d4f68869549effeeb024f11b10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.248240650Z" level=warning msg="cleaning up after shim disconnected" id=4f7e1db775dc208f8832a04d1b684f4ab8f803d4f68869549effeeb024f11b10 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.248409455Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:47 functional-100700 dockerd[1438]: time="2024-08-07T17:55:47.969145341Z" level=info msg="ignoring event" container=8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:47 functional-100700 dockerd[1444]: time="2024-08-07T17:55:47.970761397Z" level=info msg="shim disconnected" id=8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b namespace=moby
	Aug 07 17:55:47 functional-100700 dockerd[1444]: time="2024-08-07T17:55:47.971219213Z" level=warning msg="cleaning up after shim disconnected" id=8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b namespace=moby
	Aug 07 17:55:47 functional-100700 dockerd[1444]: time="2024-08-07T17:55:47.973093477Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:52 functional-100700 dockerd[1438]: time="2024-08-07T17:55:52.961783600Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65
	Aug 07 17:55:53 functional-100700 dockerd[1444]: time="2024-08-07T17:55:53.004669561Z" level=info msg="shim disconnected" id=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65 namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1444]: time="2024-08-07T17:55:53.004855260Z" level=warning msg="cleaning up after shim disconnected" id=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65 namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1444]: time="2024-08-07T17:55:53.004935360Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.005290559Z" level=info msg="ignoring event" container=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082366606Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082484305Z" level=info msg="Daemon shutdown complete"
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082557905Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082900804Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 17:55:54 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 17:55:54 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 17:55:54 functional-100700 systemd[1]: docker.service: Consumed 6.104s CPU time.
	Aug 07 17:55:54 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:55:54 functional-100700 dockerd[4430]: time="2024-08-07T17:55:54.151963549Z" level=info msg="Starting up"
	Aug 07 17:55:54 functional-100700 dockerd[4430]: time="2024-08-07T17:55:54.153307848Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:55:54 functional-100700 dockerd[4430]: time="2024-08-07T17:55:54.154476946Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=4437
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.189116214Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217487588Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217672488Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217724888Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217743588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217923788Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218099587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218341687Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218462487Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218487087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218500687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218536087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218676087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.221803184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.221937684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222170584Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222318784Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222351783Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222377583Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222707583Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222769383Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222791383Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222824883Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222843583Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.223037183Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.223447282Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.223700782Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224128482Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224153982Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224211182Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224225882Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224249282Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224283382Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224302582Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224321982Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224336982Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224348882Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224375782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224415382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224431982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224446182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224464282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224487782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224502981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224516181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224556381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224572481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224585181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224597481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224611681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224628781Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224653481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224668081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224681281Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225080281Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225134081Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225150081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225165281Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225176081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225190081Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225200981Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225570080Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225664580Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225760880Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225784780Z" level=info msg="containerd successfully booted in 0.038263s"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.203623721Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.249075279Z" level=info msg="Loading containers: start."
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.486283283Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.611181043Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.728226393Z" level=info msg="Loading containers: done."
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.754038026Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.754150926Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.805054292Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:55:55 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.817756408Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526494067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526577168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526591169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526693470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.558712297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.563728963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.563952066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.564587075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.615923059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.616533767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.616742570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.616989273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.649839111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.650906025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.651080528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.651280130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002162008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002309810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002327411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002784217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146319020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146402021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146419521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146546323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186224804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186289605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186312905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186429907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293246071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293400074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293416674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293513875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.342920003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.345412453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.345440953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.346309071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.427619805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.427935011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.427958412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.428175716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450251060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450326762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450344662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450438364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021378960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021447242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021467036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021664985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.032269201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.032481345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.033742514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.034300967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.230710505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.231303050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.231404523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.231887696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:59:53 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.701240101Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.891480424Z" level=info msg="ignoring event" container=011ec8239aa1c51c9f4776f7bfc897d8a3292c8e36986f970b1e2c2ca9da86fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.891549927Z" level=info msg="ignoring event" container=b39dd511076447f842f8327dca684bf0a48d714c0f66e22029b88d889e068b15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892313355Z" level=info msg="shim disconnected" id=b39dd511076447f842f8327dca684bf0a48d714c0f66e22029b88d889e068b15 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892402158Z" level=warning msg="cleaning up after shim disconnected" id=b39dd511076447f842f8327dca684bf0a48d714c0f66e22029b88d889e068b15 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892417259Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892816273Z" level=info msg="shim disconnected" id=011ec8239aa1c51c9f4776f7bfc897d8a3292c8e36986f970b1e2c2ca9da86fd namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.893079383Z" level=warning msg="cleaning up after shim disconnected" id=011ec8239aa1c51c9f4776f7bfc897d8a3292c8e36986f970b1e2c2ca9da86fd namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.893229989Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.943367240Z" level=info msg="ignoring event" container=333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.943759054Z" level=info msg="shim disconnected" id=333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.943822956Z" level=warning msg="cleaning up after shim disconnected" id=333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.943835757Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.963273574Z" level=info msg="shim disconnected" id=ee73743ff4167ab913696e97e9a77ab9a2b23dba89c1348473bb28a7089d45c2 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.963426780Z" level=warning msg="cleaning up after shim disconnected" id=ee73743ff4167ab913696e97e9a77ab9a2b23dba89c1348473bb28a7089d45c2 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.963795094Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.980683817Z" level=info msg="ignoring event" container=ee73743ff4167ab913696e97e9a77ab9a2b23dba89c1348473bb28a7089d45c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.981327041Z" level=info msg="ignoring event" container=d57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.981517248Z" level=info msg="ignoring event" container=ff33a3021f789dbad1bd68bc673700c1dc540ec4a83d74c98c4915f4c66932e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.983088406Z" level=info msg="shim disconnected" id=d57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.983163809Z" level=warning msg="cleaning up after shim disconnected" id=d57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.983176709Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.002058106Z" level=info msg="ignoring event" container=60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.002564025Z" level=info msg="shim disconnected" id=60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.003062843Z" level=warning msg="cleaning up after shim disconnected" id=60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.009295273Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.013796640Z" level=info msg="shim disconnected" id=623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.019740659Z" level=warning msg="cleaning up after shim disconnected" id=623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.019785161Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.008834456Z" level=info msg="shim disconnected" id=ff33a3021f789dbad1bd68bc673700c1dc540ec4a83d74c98c4915f4c66932e7 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.021492824Z" level=warning msg="cleaning up after shim disconnected" id=ff33a3021f789dbad1bd68bc673700c1dc540ec4a83d74c98c4915f4c66932e7 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.021550026Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031232683Z" level=info msg="ignoring event" container=a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031289685Z" level=info msg="ignoring event" container=241a3e17f71d6549f6693cc1faa13d1a82113d0126d862a38ff582ea671e7704 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031323187Z" level=info msg="ignoring event" container=0a22b12bc48412adcefcda1aa5f0176acba34d5ef617a5fc6e81727d4bf2fb9f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031338987Z" level=info msg="ignoring event" container=125a3650c7bb9ceb749c5379c1700ef34f0671035364a39c7f8a55a9e244527f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031357688Z" level=info msg="ignoring event" container=623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.016580642Z" level=info msg="shim disconnected" id=a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.033182755Z" level=info msg="shim disconnected" id=125a3650c7bb9ceb749c5379c1700ef34f0671035364a39c7f8a55a9e244527f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.035991259Z" level=warning msg="cleaning up after shim disconnected" id=125a3650c7bb9ceb749c5379c1700ef34f0671035364a39c7f8a55a9e244527f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.036075162Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.036361773Z" level=warning msg="cleaning up after shim disconnected" id=a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.036396774Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.015009784Z" level=info msg="shim disconnected" id=241a3e17f71d6549f6693cc1faa13d1a82113d0126d862a38ff582ea671e7704 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.040268717Z" level=warning msg="cleaning up after shim disconnected" id=241a3e17f71d6549f6693cc1faa13d1a82113d0126d862a38ff582ea671e7704 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.040483025Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.017838089Z" level=info msg="shim disconnected" id=0a22b12bc48412adcefcda1aa5f0176acba34d5ef617a5fc6e81727d4bf2fb9f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.056017998Z" level=warning msg="cleaning up after shim disconnected" id=0a22b12bc48412adcefcda1aa5f0176acba34d5ef617a5fc6e81727d4bf2fb9f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.056073300Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:58 functional-100700 dockerd[4430]: time="2024-08-07T17:59:58.843549639Z" level=info msg="ignoring event" container=3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:58 functional-100700 dockerd[4437]: time="2024-08-07T17:59:58.844293967Z" level=info msg="shim disconnected" id=3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45 namespace=moby
	Aug 07 17:59:58 functional-100700 dockerd[4437]: time="2024-08-07T17:59:58.844701482Z" level=warning msg="cleaning up after shim disconnected" id=3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45 namespace=moby
	Aug 07 17:59:58 functional-100700 dockerd[4437]: time="2024-08-07T17:59:58.845283503Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 18:00:03 functional-100700 dockerd[4430]: time="2024-08-07T18:00:03.891107278Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077
	Aug 07 18:00:03 functional-100700 dockerd[4430]: time="2024-08-07T18:00:03.954477534Z" level=info msg="ignoring event" container=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 18:00:03 functional-100700 dockerd[4437]: time="2024-08-07T18:00:03.957207011Z" level=info msg="shim disconnected" id=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077 namespace=moby
	Aug 07 18:00:03 functional-100700 dockerd[4437]: time="2024-08-07T18:00:03.957346010Z" level=warning msg="cleaning up after shim disconnected" id=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077 namespace=moby
	Aug 07 18:00:03 functional-100700 dockerd[4437]: time="2024-08-07T18:00:03.957506408Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.021732016Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.022302513Z" level=info msg="Daemon shutdown complete"
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.022522311Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.022549211Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 18:00:05 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 18:00:05 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 18:00:05 functional-100700 systemd[1]: docker.service: Consumed 8.144s CPU time.
	Aug 07 18:00:05 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 18:00:05 functional-100700 dockerd[8031]: time="2024-08-07T18:00:05.087273572Z" level=info msg="Starting up"
	Aug 07 18:01:05 functional-100700 dockerd[8031]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 07 18:01:05 functional-100700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 07 18:01:05 functional-100700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 07 18:01:05 functional-100700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0807 18:01:05.216074    2092 out.go:239] * 
	W0807 18:01:05.217856    2092 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 18:01:05.222151    2092 out.go:177] 
	
	
	==> Docker <==
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e'"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="error getting RW layer size for container ID 'ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077'"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="error getting RW layer size for container ID '76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="Set backoffDuration to : 1m0s for container ID '76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65'"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="error getting RW layer size for container ID '9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf'"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="error getting RW layer size for container ID '60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="Set backoffDuration to : 1m0s for container ID '60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804'"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="error getting RW layer size for container ID 'a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76'"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="error getting RW layer size for container ID '8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b'"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="error getting RW layer size for container ID '333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="Set backoffDuration to : 1m0s for container ID '333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33'"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="error getting RW layer size for container ID '88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="Set backoffDuration to : 1m0s for container ID '88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9'"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="error getting RW layer size for container ID '03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="Set backoffDuration to : 1m0s for container ID '03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e'"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="error getting RW layer size for container ID '623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="Set backoffDuration to : 1m0s for container ID '623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c'"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="error getting RW layer size for container ID '1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4'"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Aug 07 18:06:06 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:06:06Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-07T18:06:06Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +15.852357] systemd-fstab-generator[2536]: Ignoring "noauto" option for root device
	[  +0.197291] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.405097] kauditd_printk_skb: 88 callbacks suppressed
	[Aug 7 17:54] kauditd_printk_skb: 10 callbacks suppressed
	[Aug 7 17:55] systemd-fstab-generator[3950]: Ignoring "noauto" option for root device
	[  +0.649998] systemd-fstab-generator[3985]: Ignoring "noauto" option for root device
	[  +0.264046] systemd-fstab-generator[3997]: Ignoring "noauto" option for root device
	[  +0.315437] systemd-fstab-generator[4011]: Ignoring "noauto" option for root device
	[  +5.331377] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.070413] systemd-fstab-generator[4649]: Ignoring "noauto" option for root device
	[  +0.207492] systemd-fstab-generator[4660]: Ignoring "noauto" option for root device
	[  +0.210288] systemd-fstab-generator[4672]: Ignoring "noauto" option for root device
	[  +0.283847] systemd-fstab-generator[4687]: Ignoring "noauto" option for root device
	[  +0.989188] systemd-fstab-generator[4863]: Ignoring "noauto" option for root device
	[Aug 7 17:56] systemd-fstab-generator[4990]: Ignoring "noauto" option for root device
	[  +0.108824] kauditd_printk_skb: 137 callbacks suppressed
	[  +6.503684] kauditd_printk_skb: 52 callbacks suppressed
	[ +11.988991] systemd-fstab-generator[5884]: Ignoring "noauto" option for root device
	[  +0.145733] kauditd_printk_skb: 31 callbacks suppressed
	[Aug 7 17:59] systemd-fstab-generator[7536]: Ignoring "noauto" option for root device
	[  +0.151220] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.513441] systemd-fstab-generator[7585]: Ignoring "noauto" option for root device
	[  +0.282699] systemd-fstab-generator[7598]: Ignoring "noauto" option for root device
	[  +0.340219] systemd-fstab-generator[7612]: Ignoring "noauto" option for root device
	[  +5.344642] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 18:07:06 up 15 min,  0 users,  load average: 0.00, 0.08, 0.14
	Linux functional-100700 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 07 18:07:01 functional-100700 kubelet[4998]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 18:07:02 functional-100700 kubelet[4998]: E0807 18:07:02.708185    4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused" interval="7s"
	Aug 07 18:07:03 functional-100700 kubelet[4998]: E0807 18:07:03.750857    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?resourceVersion=0&timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:07:03 functional-100700 kubelet[4998]: E0807 18:07:03.751865    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:07:03 functional-100700 kubelet[4998]: E0807 18:07:03.752957    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:07:03 functional-100700 kubelet[4998]: E0807 18:07:03.754080    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:07:03 functional-100700 kubelet[4998]: E0807 18:07:03.754778    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:07:03 functional-100700 kubelet[4998]: E0807 18:07:03.754819    4998 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Aug 07 18:07:04 functional-100700 kubelet[4998]: E0807 18:07:04.110145    4998 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 7m10.853095954s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Aug 07 18:07:06 functional-100700 kubelet[4998]: E0807 18:07:06.616365    4998 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Aug 07 18:07:06 functional-100700 kubelet[4998]: E0807 18:07:06.616760    4998 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:07:06 functional-100700 kubelet[4998]: E0807 18:07:06.616883    4998 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:07:06 functional-100700 kubelet[4998]: E0807 18:07:06.617042    4998 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 07 18:07:06 functional-100700 kubelet[4998]: E0807 18:07:06.617763    4998 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:07:06 functional-100700 kubelet[4998]: E0807 18:07:06.617323    4998 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 07 18:07:06 functional-100700 kubelet[4998]: E0807 18:07:06.618508    4998 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:07:06 functional-100700 kubelet[4998]: E0807 18:07:06.618414    4998 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Aug 07 18:07:06 functional-100700 kubelet[4998]: E0807 18:07:06.618891    4998 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:07:06 functional-100700 kubelet[4998]: E0807 18:07:06.618347    4998 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:07:06 functional-100700 kubelet[4998]: E0807 18:07:06.619187    4998 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:07:06 functional-100700 kubelet[4998]: E0807 18:07:06.617415    4998 kubelet.go:2919] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:07:06 functional-100700 kubelet[4998]: I0807 18:07:06.619118    4998 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:07:06 functional-100700 kubelet[4998]: E0807 18:07:06.619270    4998 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 07 18:07:06 functional-100700 kubelet[4998]: E0807 18:07:06.619300    4998 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 07 18:07:06 functional-100700 kubelet[4998]: E0807 18:07:06.619562    4998 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0807 18:04:33.040025    3560 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0807 18:05:06.109785    3560 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:05:06.145546    3560 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:05:06.180669    3560 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:05:06.214934    3560 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:05:06.251756    3560 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:05:06.286464    3560 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:05:06.322956    3560 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:06:06.408720    3560 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-100700 -n functional-100700
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-100700 -n functional-100700: exit status 2 (11.9584634s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0807 18:07:07.352654    4392 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-100700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (180.41s)

                                                
                                    
TestFunctional/serial/InvalidService (4.27s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-100700 apply -f testdata\invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-100700 apply -f testdata\invalidsvc.yaml: exit status 1 (4.2613889s)

                                                
                                                
** stderr ** 
	error: error validating "testdata\\invalidsvc.yaml": error validating data: failed to download openapi: Get "https://172.28.235.211:8441/openapi/v2?timeout=32s": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test.go:2319: kubectl --context functional-100700 apply -f testdata\invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (4.27s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-100700 config unset cpus" to be -""- but got *"W0807 18:12:12.549890    9292 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-100700 config get cpus: exit status 14 (278.3912ms)

                                                
                                                
** stderr ** 
	W0807 18:12:12.862774    9552 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-100700 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0807 18:12:12.862774    9552 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-100700 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0807 18:12:13.165001    7092 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-100700 config get cpus" to be -""- but got *"W0807 18:12:13.502213    2344 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-100700 config unset cpus" to be -""- but got *"W0807 18:12:13.784720    5848 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-100700 config get cpus: exit status 14 (252.3759ms)

                                                
                                                
** stderr ** 
	W0807 18:12:14.070845    2492 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-100700 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0807 18:12:14.070845    2492 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.80s)

                                                
                                    
TestFunctional/parallel/StatusCmd (286.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-100700 status: exit status 2 (12.5738119s)

                                                
                                                
-- stdout --
	functional-100700
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0807 18:23:39.860583    8496 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:852: failed to run minikube status. args "out/minikube-windows-amd64.exe -p functional-100700 status" : exit status 2
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-100700 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (12.300287s)

                                                
                                                
-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

                                                
                                                
-- /stdout --
** stderr ** 
	W0807 18:23:52.422574    3828 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-windows-amd64.exe -p functional-100700 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-100700 status -o json: exit status 2 (12.8695695s)

                                                
                                                
-- stdout --
	{"Name":"functional-100700","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	W0807 18:24:04.724143    4608 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-windows-amd64.exe -p functional-100700 status -o json" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-100700 -n functional-100700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-100700 -n functional-100700: exit status 2 (12.9038036s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0807 18:24:17.616238    6748 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 logs -n 25: (3m41.4965485s)
helpers_test.go:252: TestFunctional/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	|-----------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|  Command  |                                 Args                                  |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|-----------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ssh       | functional-100700 ssh -n                                              | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	|           | functional-100700 sudo cat                                            |                   |                   |         |                     |                     |
	|           | /home/docker/cp-test.txt                                              |                   |                   |         |                     |                     |
	| ssh       | functional-100700 ssh sudo cat                                        | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	|           | /etc/ssl/certs/3ec20f2e.0                                             |                   |                   |         |                     |                     |
	| cp        | functional-100700 cp                                                  | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	|           | testdata\cp-test.txt                                                  |                   |                   |         |                     |                     |
	|           | /tmp/does/not/exist/cp-test.txt                                       |                   |                   |         |                     |                     |
	| ssh       | functional-100700 ssh -n                                              | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	|           | functional-100700 sudo cat                                            |                   |                   |         |                     |                     |
	|           | /tmp/does/not/exist/cp-test.txt                                       |                   |                   |         |                     |                     |
	| tunnel    | functional-100700 tunnel                                              | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC |                     |
	|           | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| tunnel    | functional-100700 tunnel                                              | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC |                     |
	|           | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| tunnel    | functional-100700 tunnel                                              | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC |                     |
	|           | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| addons    | functional-100700 addons list                                         | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	| addons    | functional-100700 addons list                                         | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	|           | -o json                                                               |                   |                   |         |                     |                     |
	| service   | functional-100700 service list                                        | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:17 UTC |                     |
	| service   | functional-100700 service list                                        | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:17 UTC |                     |
	|           | -o json                                                               |                   |                   |         |                     |                     |
	| service   | functional-100700 service                                             | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:17 UTC |                     |
	|           | --namespace=default --https                                           |                   |                   |         |                     |                     |
	|           | --url hello-node                                                      |                   |                   |         |                     |                     |
	| service   | functional-100700                                                     | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:17 UTC |                     |
	|           | service hello-node --url                                              |                   |                   |         |                     |                     |
	|           | --format={{.IP}}                                                      |                   |                   |         |                     |                     |
	| service   | functional-100700 service                                             | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:17 UTC |                     |
	|           | hello-node --url                                                      |                   |                   |         |                     |                     |
	| image     | functional-100700 image load --daemon                                 | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:18 UTC | 07 Aug 24 18:19 UTC |
	|           | docker.io/kicbase/echo-server:functional-100700                       |                   |                   |         |                     |                     |
	|           | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| image     | functional-100700 image ls                                            | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:19 UTC | 07 Aug 24 18:20 UTC |
	| image     | functional-100700 image load --daemon                                 | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:20 UTC | 07 Aug 24 18:21 UTC |
	|           | docker.io/kicbase/echo-server:functional-100700                       |                   |                   |         |                     |                     |
	|           | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| image     | functional-100700 image ls                                            | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:21 UTC | 07 Aug 24 18:22 UTC |
	| image     | functional-100700 image load --daemon                                 | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:22 UTC | 07 Aug 24 18:23 UTC |
	|           | docker.io/kicbase/echo-server:functional-100700                       |                   |                   |         |                     |                     |
	|           | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| image     | functional-100700 image ls                                            | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:23 UTC | 07 Aug 24 18:24 UTC |
	| ssh       | functional-100700 ssh sudo                                            | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:23 UTC |                     |
	|           | systemctl is-active crio                                              |                   |                   |         |                     |                     |
	| start     | -p functional-100700                                                  | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:23 UTC |                     |
	|           | --dry-run --memory                                                    |                   |                   |         |                     |                     |
	|           | 250MB --alsologtostderr                                               |                   |                   |         |                     |                     |
	|           | --driver=hyperv                                                       |                   |                   |         |                     |                     |
	| image     | functional-100700 image save                                          | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:24 UTC |                     |
	|           | docker.io/kicbase/echo-server:functional-100700                       |                   |                   |         |                     |                     |
	|           | C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar |                   |                   |         |                     |                     |
	|           | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| start     | -p functional-100700                                                  | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:24 UTC |                     |
	|           | --dry-run --memory                                                    |                   |                   |         |                     |                     |
	|           | 250MB --alsologtostderr                                               |                   |                   |         |                     |                     |
	|           | --driver=hyperv                                                       |                   |                   |         |                     |                     |
	| dashboard | --url --port 36195                                                    | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:24 UTC |                     |
	|           | -p functional-100700                                                  |                   |                   |         |                     |                     |
	|           | --alsologtostderr -v=1                                                |                   |                   |         |                     |                     |
	|-----------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 18:24:24
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 18:24:24.553465    1512 out.go:291] Setting OutFile to fd 1380 ...
	I0807 18:24:24.554197    1512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:24:24.554197    1512 out.go:304] Setting ErrFile to fd 1192...
	I0807 18:24:24.554197    1512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:24:24.578349    1512 out.go:298] Setting JSON to false
	I0807 18:24:24.581343    1512 start.go:129] hostinfo: {"hostname":"minikube6","uptime":316994,"bootTime":1722738070,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4717 Build 19045.4717","kernelVersion":"10.0.19045.4717 Build 19045.4717","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0807 18:24:24.581343    1512 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 18:24:24.585829    1512 out.go:177] * [functional-100700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	I0807 18:24:24.588564    1512 notify.go:220] Checking for updates...
	I0807 18:24:24.588917    1512 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 18:24:24.591775    1512 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 18:24:24.594388    1512 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0807 18:24:24.596578    1512 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 18:24:24.600850    1512 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
	==> Docker <==
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="error getting RW layer size for container ID '8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e'"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="error getting RW layer size for container ID '1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4'"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="error getting RW layer size for container ID '9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf'"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="error getting RW layer size for container ID '3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45'"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="error getting RW layer size for container ID '88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9'"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="error getting RW layer size for container ID '623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c'"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="error getting RW layer size for container ID 'ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077'"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="error getting RW layer size for container ID '76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65'"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="error getting RW layer size for container ID '8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b'"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="error getting RW layer size for container ID '60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804'"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="error getting RW layer size for container ID 'a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76'"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="error getting RW layer size for container ID '03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:27:11 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:27:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e'"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-07T18:27:13Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.315437] systemd-fstab-generator[4011]: Ignoring "noauto" option for root device
	[  +5.331377] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.070413] systemd-fstab-generator[4649]: Ignoring "noauto" option for root device
	[  +0.207492] systemd-fstab-generator[4660]: Ignoring "noauto" option for root device
	[  +0.210288] systemd-fstab-generator[4672]: Ignoring "noauto" option for root device
	[  +0.283847] systemd-fstab-generator[4687]: Ignoring "noauto" option for root device
	[  +0.989188] systemd-fstab-generator[4863]: Ignoring "noauto" option for root device
	[Aug 7 17:56] systemd-fstab-generator[4990]: Ignoring "noauto" option for root device
	[  +0.108824] kauditd_printk_skb: 137 callbacks suppressed
	[  +6.503684] kauditd_printk_skb: 52 callbacks suppressed
	[ +11.988991] systemd-fstab-generator[5884]: Ignoring "noauto" option for root device
	[  +0.145733] kauditd_printk_skb: 31 callbacks suppressed
	[Aug 7 17:59] systemd-fstab-generator[7536]: Ignoring "noauto" option for root device
	[  +0.151220] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.513441] systemd-fstab-generator[7585]: Ignoring "noauto" option for root device
	[  +0.282699] systemd-fstab-generator[7598]: Ignoring "noauto" option for root device
	[  +0.340219] systemd-fstab-generator[7612]: Ignoring "noauto" option for root device
	[  +5.344642] kauditd_printk_skb: 89 callbacks suppressed
	[Aug 7 18:12] systemd-fstab-generator[11104]: Ignoring "noauto" option for root device
	[Aug 7 18:13] systemd-fstab-generator[11514]: Ignoring "noauto" option for root device
	[  +0.128359] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 7 18:17] systemd-fstab-generator[12657]: Ignoring "noauto" option for root device
	[  +0.123972] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 7 18:18] systemd-fstab-generator[13086]: Ignoring "noauto" option for root device
	[  +0.146348] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:28:11 up 37 min,  0 users,  load average: 0.13, 0.06, 0.06
	Linux functional-100700 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 07 18:28:08 functional-100700 kubelet[4998]: E0807 18:28:08.884332    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?resourceVersion=0&timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:28:08 functional-100700 kubelet[4998]: E0807 18:28:08.885269    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:28:08 functional-100700 kubelet[4998]: E0807 18:28:08.886715    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:28:08 functional-100700 kubelet[4998]: E0807 18:28:08.887896    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:28:08 functional-100700 kubelet[4998]: E0807 18:28:08.889065    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:28:08 functional-100700 kubelet[4998]: E0807 18:28:08.889106    4998 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Aug 07 18:28:09 functional-100700 kubelet[4998]: E0807 18:28:09.349589    4998 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 28m16.09254283s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/run/docker.sock: read: connection reset by peer]"
	Aug 07 18:28:10 functional-100700 kubelet[4998]: E0807 18:28:10.171084    4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused" interval="7s"
	Aug 07 18:28:11 functional-100700 kubelet[4998]: E0807 18:28:11.346218    4998 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-100700.17e9841fb36ee3f5\": dial tcp 172.28.235.211:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-100700.17e9841fb36ee3f5  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-100700,UID:00b3db9060a30b06edb713820a5caeb5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.28.235.211:8441/readyz\": dial tcp 172.28.235.211:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-100700,},FirstTimestamp:2024-08-07 18:00:04.135166965 +0000 UTC m=+242.626871687,LastTimestamp:2024-08-07 18:00:08.130597523 +0000 UTC m=+246.622302345,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-100700,}"
	Aug 07 18:28:11 functional-100700 kubelet[4998]: E0807 18:28:11.635532    4998 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Aug 07 18:28:11 functional-100700 kubelet[4998]: E0807 18:28:11.636545    4998 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:28:11 functional-100700 kubelet[4998]: E0807 18:28:11.636825    4998 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:28:11 functional-100700 kubelet[4998]: E0807 18:28:11.641320    4998 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 07 18:28:11 functional-100700 kubelet[4998]: E0807 18:28:11.641554    4998 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:28:11 functional-100700 kubelet[4998]: E0807 18:28:11.641594    4998 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 07 18:28:11 functional-100700 kubelet[4998]: E0807 18:28:11.641624    4998 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:28:11 functional-100700 kubelet[4998]: E0807 18:28:11.642281    4998 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Aug 07 18:28:11 functional-100700 kubelet[4998]: E0807 18:28:11.642360    4998 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:28:11 functional-100700 kubelet[4998]: I0807 18:28:11.642378    4998 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:28:11 functional-100700 kubelet[4998]: E0807 18:28:11.642452    4998 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:28:11 functional-100700 kubelet[4998]: E0807 18:28:11.642544    4998 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:28:11 functional-100700 kubelet[4998]: E0807 18:28:11.643215    4998 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 07 18:28:11 functional-100700 kubelet[4998]: E0807 18:28:11.644993    4998 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 07 18:28:11 functional-100700 kubelet[4998]: E0807 18:28:11.646510    4998 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Aug 07 18:28:11 functional-100700 kubelet[4998]: E0807 18:28:11.810089    4998 kubelet.go:2919] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/run/docker.sock: read: connection reset by peer"
	

-- /stdout --
** stderr ** 
	W0807 18:24:30.506918    8344 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0807 18:25:10.962693    8344 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:25:10.995951    8344 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:25:11.040649    8344 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:26:11.171722    8344 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:26:11.207699    8344 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:26:11.262340    8344 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:27:11.422039    8344 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:27:11.455545    8344 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-100700 -n functional-100700
E0807 18:28:20.471181    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-100700 -n functional-100700: exit status 2 (14.1396598s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0807 18:28:12.040388    8788 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-100700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (286.30s)

TestFunctional/parallel/ServiceCmdConnect (273.11s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-100700 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1625: (dbg) Non-zero exit: kubectl --context functional-100700 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8: exit status 1 (2.1611711s)

** stderr ** 
	error: failed to create deployment: Post "https://172.28.235.211:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-100700 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-100700 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-100700 describe po hello-node-connect: exit status 1 (2.1622455s)

** stderr ** 
	Unable to connect to the server: dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:1600: "kubectl --context functional-100700 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-100700 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-100700 logs -l app=hello-node-connect: exit status 1 (2.1894029s)

** stderr ** 
	Unable to connect to the server: dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:1606: "kubectl --context functional-100700 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-100700 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-100700 describe svc hello-node-connect: exit status 1 (2.1902584s)

** stderr ** 
	Unable to connect to the server: dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:1612: "kubectl --context functional-100700 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-100700 -n functional-100700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-100700 -n functional-100700: exit status 2 (12.5206579s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0807 18:13:58.840906    7476 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 logs -n 25: (3m58.536043s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|------------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|  Command   |                                                Args                                                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|------------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| config     | functional-100700 config get                                                                        | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC |                     |
	|            | cpus                                                                                                |                   |                   |         |                     |                     |
	| config     | functional-100700 config set                                                                        | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | cpus 2                                                                                              |                   |                   |         |                     |                     |
	| config     | functional-100700 config get                                                                        | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | cpus                                                                                                |                   |                   |         |                     |                     |
	| config     | functional-100700 config unset                                                                      | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | cpus                                                                                                |                   |                   |         |                     |                     |
	| config     | functional-100700 config get                                                                        | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC |                     |
	|            | cpus                                                                                                |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh sudo cat                                                                      | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | /etc/test/nested/copy/9660/hosts                                                                    |                   |                   |         |                     |                     |
	| docker-env | functional-100700 docker-env                                                                        | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC |                     |
	| ssh        | functional-100700 ssh sudo cat                                                                      | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | /etc/ssl/certs/9660.pem                                                                             |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh cat                                                                           | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | /etc/hostname                                                                                       |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh sudo cat                                                                      | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | /usr/share/ca-certificates/9660.pem                                                                 |                   |                   |         |                     |                     |
	| cp         | functional-100700 cp                                                                                | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|            | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh sudo cat                                                                      | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | /etc/ssl/certs/51391683.0                                                                           |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh -n                                                                            | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | functional-100700 sudo cat                                                                          |                   |                   |         |                     |                     |
	|            | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh sudo cat                                                                      | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | /etc/ssl/certs/96602.pem                                                                            |                   |                   |         |                     |                     |
	| cp         | functional-100700 cp functional-100700:/home/docker/cp-test.txt                                     | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:13 UTC |
	|            | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd1063066128\001\cp-test.txt |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh sudo cat                                                                      | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:13 UTC |
	|            | /usr/share/ca-certificates/96602.pem                                                                |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh -n                                                                            | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	|            | functional-100700 sudo cat                                                                          |                   |                   |         |                     |                     |
	|            | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh sudo cat                                                                      | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	|            | /etc/ssl/certs/3ec20f2e.0                                                                           |                   |                   |         |                     |                     |
	| cp         | functional-100700 cp                                                                                | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	|            | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|            | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh -n                                                                            | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	|            | functional-100700 sudo cat                                                                          |                   |                   |         |                     |                     |
	|            | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| tunnel     | functional-100700 tunnel                                                                            | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC |                     |
	|            | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| tunnel     | functional-100700 tunnel                                                                            | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC |                     |
	|            | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| tunnel     | functional-100700 tunnel                                                                            | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC |                     |
	|            | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| addons     | functional-100700 addons list                                                                       | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	| addons     | functional-100700 addons list                                                                       | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	|            | -o json                                                                                             |                   |                   |         |                     |                     |
	|------------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 17:58:33
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 17:58:33.249534    2092 out.go:291] Setting OutFile to fd 728 ...
	I0807 17:58:33.250111    2092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:58:33.250111    2092 out.go:304] Setting ErrFile to fd 800...
	I0807 17:58:33.250179    2092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:58:33.269540    2092 out.go:298] Setting JSON to false
	I0807 17:58:33.272574    2092 start.go:129] hostinfo: {"hostname":"minikube6","uptime":315442,"bootTime":1722738070,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4717 Build 19045.4717","kernelVersion":"10.0.19045.4717 Build 19045.4717","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0807 17:58:33.272574    2092 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 17:58:33.277577    2092 out.go:177] * [functional-100700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	I0807 17:58:33.281028    2092 notify.go:220] Checking for updates...
	I0807 17:58:33.284043    2092 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 17:58:33.286477    2092 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 17:58:33.289594    2092 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0807 17:58:33.292627    2092 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 17:58:33.295302    2092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 17:58:33.298825    2092 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 17:58:33.298825    2092 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 17:58:38.714558    2092 out.go:177] * Using the hyperv driver based on existing profile
	I0807 17:58:38.718761    2092 start.go:297] selected driver: hyperv
	I0807 17:58:38.718761    2092 start.go:901] validating driver "hyperv" against &{Name:functional-100700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-100700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.235.211 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 17:58:38.718761    2092 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 17:58:38.771046    2092 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 17:58:38.771115    2092 cni.go:84] Creating CNI manager for ""
	I0807 17:58:38.771115    2092 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 17:58:38.771254    2092 start.go:340] cluster config:
	{Name:functional-100700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-100700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.235.211 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 17:58:38.771533    2092 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 17:58:38.776902    2092 out.go:177] * Starting "functional-100700" primary control-plane node in "functional-100700" cluster
	I0807 17:58:38.780865    2092 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 17:58:38.780865    2092 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0807 17:58:38.780865    2092 cache.go:56] Caching tarball of preloaded images
	I0807 17:58:38.780865    2092 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0807 17:58:38.780865    2092 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 17:58:38.781934    2092 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\config.json ...
	I0807 17:58:38.783866    2092 start.go:360] acquireMachinesLock for functional-100700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 17:58:38.783866    2092 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-100700"
	I0807 17:58:38.783866    2092 start.go:96] Skipping create...Using existing machine configuration
	I0807 17:58:38.783866    2092 fix.go:54] fixHost starting: 
	I0807 17:58:38.784885    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:41.603969    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:41.604807    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:41.604807    2092 fix.go:112] recreateIfNeeded on functional-100700: state=Running err=<nil>
	W0807 17:58:41.604807    2092 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 17:58:41.608533    2092 out.go:177] * Updating the running hyperv "functional-100700" VM ...
	I0807 17:58:41.613016    2092 machine.go:94] provisionDockerMachine start ...
	I0807 17:58:41.613016    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:43.839989    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:43.839989    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:43.840252    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:58:46.452954    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:58:46.452954    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:46.459781    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:58:46.460474    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:58:46.460474    2092 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 17:58:46.591805    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-100700
	
	I0807 17:58:46.591805    2092 buildroot.go:166] provisioning hostname "functional-100700"
	I0807 17:58:46.591805    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:48.755211    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:48.755427    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:48.755465    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:58:51.336039    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:58:51.336039    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:51.342623    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:58:51.342623    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:58:51.342623    2092 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-100700 && echo "functional-100700" | sudo tee /etc/hostname
	I0807 17:58:51.496578    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-100700
	
	I0807 17:58:51.496578    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:53.691439    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:53.691439    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:53.691439    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:58:56.301964    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:58:56.301964    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:56.307512    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:58:56.308194    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:58:56.308194    2092 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-100700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-100700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-100700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 17:58:56.438766    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 17:58:56.438766    2092 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0807 17:58:56.438900    2092 buildroot.go:174] setting up certificates
	I0807 17:58:56.438900    2092 provision.go:84] configureAuth start
	I0807 17:58:56.438900    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:58.655995    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:58.656961    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:58.657071    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:01.290427    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:01.290427    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:01.290831    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:03.469316    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:03.469316    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:03.469551    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:06.075723    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:06.075723    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:06.075723    2092 provision.go:143] copyHostCerts
	I0807 17:59:06.075723    2092 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0807 17:59:06.075723    2092 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0807 17:59:06.076549    2092 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0807 17:59:06.077992    2092 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0807 17:59:06.077992    2092 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0807 17:59:06.078322    2092 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0807 17:59:06.079146    2092 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0807 17:59:06.079146    2092 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0807 17:59:06.079980    2092 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0807 17:59:06.080688    2092 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-100700 san=[127.0.0.1 172.28.235.211 functional-100700 localhost minikube]
	I0807 17:59:06.262311    2092 provision.go:177] copyRemoteCerts
	I0807 17:59:06.274334    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 17:59:06.274334    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:08.466099    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:08.466421    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:08.466494    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:11.061934    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:11.061934    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:11.061934    2092 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:59:11.172314    2092 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8979173s)
	I0807 17:59:11.172848    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 17:59:11.223362    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0807 17:59:11.271809    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0807 17:59:11.319487    2092 provision.go:87] duration metric: took 14.8803963s to configureAuth
	I0807 17:59:11.319487    2092 buildroot.go:189] setting minikube options for container-runtime
	I0807 17:59:11.320542    2092 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 17:59:11.320588    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:13.493491    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:13.493491    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:13.493879    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:16.077977    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:16.077977    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:16.088668    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:16.088783    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:16.088783    2092 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0807 17:59:16.217785    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0807 17:59:16.217785    2092 buildroot.go:70] root file system type: tmpfs
	I0807 17:59:16.218443    2092 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0807 17:59:16.218443    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:18.421400    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:18.421400    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:18.421838    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:21.023576    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:21.024581    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:21.030466    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:21.031160    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:21.031160    2092 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0807 17:59:21.200213    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0807 17:59:21.200853    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:23.460840    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:23.460840    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:23.461413    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:26.144922    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:26.144922    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:26.151032    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:26.151032    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:26.151032    2092 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
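The command above uses a compare-then-swap idiom: the freshly rendered unit is written to `docker.service.new`, and only when `diff` reports a difference is it moved into place and the service reloaded and restarted. A minimal sketch of the same idiom on a throwaway file (names here are stand-ins, not minikube's):

```shell
# Compare-then-swap: only replace the installed file (and restart the
# service) when the newly rendered version actually differs.
printf 'key=old\n' > app.conf
printf 'key=new\n' > app.conf.new
if ! diff -u app.conf app.conf.new; then
  mv app.conf.new app.conf
  echo "changed: daemon-reload + restart would run here"
fi
```

This keeps an unchanged unit from triggering a needless `systemctl restart docker`.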
	I0807 17:59:26.288578    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 17:59:26.288578    2092 machine.go:97] duration metric: took 44.6749905s to provisionDockerMachine
	I0807 17:59:26.289136    2092 start.go:293] postStartSetup for "functional-100700" (driver="hyperv")
	I0807 17:59:26.289136    2092 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 17:59:26.303659    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 17:59:26.303659    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:28.549324    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:28.550453    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:28.550627    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:31.258516    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:31.258516    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:31.259399    2092 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:59:31.366436    2092 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0626314s)
	I0807 17:59:31.378993    2092 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 17:59:31.386284    2092 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 17:59:31.386284    2092 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0807 17:59:31.386889    2092 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0807 17:59:31.387933    2092 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> 96602.pem in /etc/ssl/certs
	I0807 17:59:31.388907    2092 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9660\hosts -> hosts in /etc/test/nested/copy/9660
	I0807 17:59:31.400272    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9660
	I0807 17:59:31.419285    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /etc/ssl/certs/96602.pem (1708 bytes)
	I0807 17:59:31.469876    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9660\hosts --> /etc/test/nested/copy/9660/hosts (40 bytes)
	I0807 17:59:31.522816    2092 start.go:296] duration metric: took 5.2336131s for postStartSetup
	I0807 17:59:31.522964    2092 fix.go:56] duration metric: took 52.7384232s for fixHost
	I0807 17:59:31.522964    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:33.801714    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:33.801714    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:33.802069    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:36.454493    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:36.455616    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:36.460762    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:36.461590    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:36.461590    2092 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 17:59:36.584817    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723053576.599616702
	
	I0807 17:59:36.584817    2092 fix.go:216] guest clock: 1723053576.599616702
	I0807 17:59:36.584817    2092 fix.go:229] Guest: 2024-08-07 17:59:36.599616702 +0000 UTC Remote: 2024-08-07 17:59:31.5229646 +0000 UTC m=+58.443653901 (delta=5.076652102s)
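The guest-clock check above reads the VM's clock over SSH (`date +%s.%N`), computes the delta against the host-side timestamp, and then resets the guest with `sudo date -s @<epoch>` (as the next SSH command in the log shows). A hedged local sketch of the same shape, with both reads taken on one machine as a stand-in:

```shell
# Stand-in for the guest-clock sync: in minikube, `guest` would be read over
# SSH inside the VM and the reset would run there with sudo.
guest=$(date +%s)
host=$(date +%s)
delta=$((host - guest))
if [ "${delta#-}" -gt 2 ]; then
  echo "would run inside the guest: sudo date -s @$host"
else
  echo "clocks within tolerance (delta=${delta}s)"
fi
```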
	I0807 17:59:36.584817    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:38.789376    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:38.789376    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:38.789376    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:41.408403    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:41.408403    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:41.415021    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:41.415132    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:41.415132    2092 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1723053576
	I0807 17:59:41.554262    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Aug  7 17:59:36 UTC 2024
	
	I0807 17:59:41.554342    2092 fix.go:236] clock set: Wed Aug  7 17:59:36 UTC 2024
	 (err=<nil>)
	I0807 17:59:41.554342    2092 start.go:83] releasing machines lock for "functional-100700", held for 1m2.7696728s
	I0807 17:59:41.554664    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:43.743629    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:43.743629    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:43.743690    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:46.402358    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:46.402358    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:46.408259    2092 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0807 17:59:46.408354    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:46.418508    2092 ssh_runner.go:195] Run: cat /version.json
	I0807 17:59:46.418508    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:48.678664    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:48.678664    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:48.678947    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:48.679407    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:48.679407    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:48.679407    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:51.480814    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:51.481012    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:51.481427    2092 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:59:51.507337    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:51.507337    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:51.508062    2092 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:59:51.568433    2092 ssh_runner.go:235] Completed: cat /version.json: (5.149744s)
	I0807 17:59:51.580326    2092 ssh_runner.go:195] Run: systemctl --version
	I0807 17:59:51.588096    2092 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1797709s)
	W0807 17:59:51.588200    2092 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
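The `command not found` above is the Windows binary name (`curl.exe`) being executed inside the Linux guest over SSH, where only plain `curl` (if anything) exists. A guest-agnostic probe would fall back on the plain name; this is a sketch, not minikube's code:

```shell
# Probe for curl under either name: curl.exe (Windows host) or curl (Linux
# guest). `echo none` keeps the pipeline from failing when neither exists.
probe=$(command -v curl.exe || command -v curl || echo none)
echo "probe binary: $probe"
```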
	I0807 17:59:51.605332    2092 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 17:59:51.614105    2092 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 17:59:51.625622    2092 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 17:59:51.647469    2092 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0807 17:59:51.647469    2092 start.go:495] detecting cgroup driver to use...
	I0807 17:59:51.647469    2092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 17:59:51.698634    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	W0807 17:59:51.702217    2092 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0807 17:59:51.702712    2092 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0807 17:59:51.742110    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0807 17:59:51.763294    2092 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0807 17:59:51.777226    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0807 17:59:51.810899    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 17:59:51.842846    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0807 17:59:51.874465    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 17:59:51.906374    2092 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 17:59:51.940856    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0807 17:59:51.972522    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0807 17:59:52.005392    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0807 17:59:52.039394    2092 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 17:59:52.069956    2092 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 17:59:52.100774    2092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:59:52.376248    2092 ssh_runner.go:195] Run: sudo systemctl restart containerd
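The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place: pinning the sandbox image to `pause:3.9`, forcing `SystemdCgroup = false` (the "cgroupfs" driver), and migrating runtime names to `io.containerd.runc.v2`. The two key substitutions can be exercised against a throwaway copy:

```shell
# Replay two of the sed edits above on a local config.toml instead of
# /etc/containerd/config.toml; the input content is a minimal stand-in.
cat > config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' config.toml
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' config.toml
```

The `\1` backreference preserves each line's original indentation, which matters because TOML tables in this file are indented by nesting depth.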
	I0807 17:59:52.411639    2092 start.go:495] detecting cgroup driver to use...
	I0807 17:59:52.424848    2092 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0807 17:59:52.465672    2092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 17:59:52.507105    2092 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 17:59:52.559294    2092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 17:59:52.602621    2092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 17:59:52.628877    2092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 17:59:52.677947    2092 ssh_runner.go:195] Run: which cri-dockerd
	I0807 17:59:52.696445    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0807 17:59:52.713779    2092 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0807 17:59:52.759506    2092 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0807 17:59:53.063312    2092 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0807 17:59:53.341833    2092 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0807 17:59:53.341833    2092 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0807 17:59:53.390184    2092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:59:53.669002    2092 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 18:01:05.110860    2092 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.440852s)
	I0807 18:01:05.123373    2092 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0807 18:01:05.210998    2092 out.go:177] 
	W0807 18:01:05.214928    2092 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 07 17:52:16 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:52:16 functional-100700 dockerd[674]: time="2024-08-07T17:52:16.458341985Z" level=info msg="Starting up"
	Aug 07 17:52:16 functional-100700 dockerd[674]: time="2024-08-07T17:52:16.459483937Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:52:16 functional-100700 dockerd[674]: time="2024-08-07T17:52:16.460719594Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=680
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.493113277Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523238457Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523275259Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523339562Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523356263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523427766Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523446067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523804083Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523901688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523925089Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523938689Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.524034194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.524376109Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.527352746Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.527600157Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528068478Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528219485Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528416294Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528629904Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.581871643Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.581973248Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582080552Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582105454Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582123954Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582283562Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582887889Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583040296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583147301Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583169102Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583185303Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583200004Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583228805Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583248006Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583336710Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583450015Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583471716Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583486017Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583527319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583544019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583560020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583595322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583708227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583728328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583743029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583769930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583818932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583858834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583890635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583921137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583935837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583972939Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584001540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584017041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584038742Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584125046Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584167548Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584182949Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584202250Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584215250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584231651Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584243051Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584478362Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584610368Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584865080Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.585009287Z" level=info msg="containerd successfully booted in 0.093145s"
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.539100050Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.582623719Z" level=info msg="Loading containers: start."
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.757771440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.984107768Z" level=info msg="Loading containers: done."
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.005649861Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.006524396Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.114597281Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.114734986Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:52:18 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:52:49 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.863488345Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.866317260Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.866749062Z" level=info msg="Daemon shutdown complete"
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.867048363Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.867142864Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 17:52:50 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 17:52:50 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 17:52:50 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:52:50 functional-100700 dockerd[1088]: time="2024-08-07T17:52:50.924908641Z" level=info msg="Starting up"
	Aug 07 17:52:50 functional-100700 dockerd[1088]: time="2024-08-07T17:52:50.926025447Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:52:50 functional-100700 dockerd[1088]: time="2024-08-07T17:52:50.927064452Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1094
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.958194110Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986326653Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986364954Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986401554Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986436154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986479654Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986495054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986720855Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986821556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986844356Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986855856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986880556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.987134958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990330074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990438474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990847676Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990948477Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991014577Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991067378Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991319879Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991378979Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991397879Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991412779Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991428979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991496580Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992185983Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992409884Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992647286Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992672686Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992688386Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992794286Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993149888Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993243389Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993271489Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993292489Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993307089Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993318789Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993338389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993353889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993377989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993393189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993409289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993422089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993433690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993445490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993457890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993471990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993490190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993561890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993582490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993597890Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993619090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993632991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993644091Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993764891Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993878492Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993897692Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993910692Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994016892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994112293Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994155593Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994503995Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994761996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994864197Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.995059098Z" level=info msg="containerd successfully booted in 0.037887s"
	Aug 07 17:52:51 functional-100700 dockerd[1088]: time="2024-08-07T17:52:51.976962789Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.014319979Z" level=info msg="Loading containers: start."
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.153625988Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.280398732Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.383509956Z" level=info msg="Loading containers: done."
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.407173376Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.407304177Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.460329447Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.460475947Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:52:52 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.251394538Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.254204052Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 17:53:01 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.260155282Z" level=info msg="Daemon shutdown complete"
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.260456984Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.260937686Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 17:53:02 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 17:53:02 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 17:53:02 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:53:02 functional-100700 dockerd[1438]: time="2024-08-07T17:53:02.321797079Z" level=info msg="Starting up"
	Aug 07 17:53:02 functional-100700 dockerd[1438]: time="2024-08-07T17:53:02.323692689Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:53:02 functional-100700 dockerd[1438]: time="2024-08-07T17:53:02.324967095Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1444
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.356801457Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391549934Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391620134Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391684835Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391704135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391750835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391768635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391946636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392117037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392142737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392156437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392185437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392310838Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395280253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395439954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395604655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395701255Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395733555Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396053557Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396602860Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396804261Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396886161Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396963461Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397040262Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397105662Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397382264Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397664265Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397760665Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397781966Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397796766Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397810066Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397829466Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397849466Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397864866Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397877866Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397890366Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397902266Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397930866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398086167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398131667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398146767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398159868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398173168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398186368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398199368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398230968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398291368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398305068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398318768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398347068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398379069Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398633370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398774671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398795271Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398837271Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399058072Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399114872Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399134672Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399145573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399188373Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399202473Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399579475Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399779376Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399959877Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.400151978Z" level=info msg="containerd successfully booted in 0.045445s"
	Aug 07 17:53:03 functional-100700 dockerd[1438]: time="2024-08-07T17:53:03.371421015Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.638419724Z" level=info msg="Loading containers: start."
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.762102252Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.878637045Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.979158756Z" level=info msg="Loading containers: done."
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.006779696Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.006939697Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.050899521Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.051782725Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:53:07 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.457114123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.457756947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.457814849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.458624579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.566929844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.567055349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.567075749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.567173853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.642620715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.643094533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.643146835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.647326788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.647829406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.647978912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.648649436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.651222930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.841899112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.842260225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.842357529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.842730042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987249334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987542945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987581346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987882057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.071713287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.072501915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.072657920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.072778524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.120557979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.120838189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.121035196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.121432210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836342825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836494127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836527028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836919632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.031505099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.031971604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.032036705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.032230607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.071740773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.071807874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.071821974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.072043276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.388937110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.389400016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.389566918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.390025223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.011330360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.011407583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.013870604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.017327916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.053458090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.053872712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.054119484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.054800983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064470635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064566761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064584666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064692294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.363127500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.363436282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.363741263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.364157974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:51 functional-100700 dockerd[1438]: time="2024-08-07T17:53:51.247657321Z" level=info msg="ignoring event" container=9d5becd51e7186ec4acc242dcdf88a4327987ec048e0ee52430eac221c2fc0fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.250057633Z" level=info msg="shim disconnected" id=9d5becd51e7186ec4acc242dcdf88a4327987ec048e0ee52430eac221c2fc0fe namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.250177263Z" level=warning msg="cleaning up after shim disconnected" id=9d5becd51e7186ec4acc242dcdf88a4327987ec048e0ee52430eac221c2fc0fe namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.250194468Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1438]: time="2024-08-07T17:53:51.423182591Z" level=info msg="ignoring event" container=2a2add9d4c7dacc6804bc6f6e21cf8821ae80a5ef75964e66784019654fe3bc3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.423885070Z" level=info msg="shim disconnected" id=2a2add9d4c7dacc6804bc6f6e21cf8821ae80a5ef75964e66784019654fe3bc3 namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.424164141Z" level=warning msg="cleaning up after shim disconnected" id=2a2add9d4c7dacc6804bc6f6e21cf8821ae80a5ef75964e66784019654fe3bc3 namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.424226557Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:42 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:55:42 functional-100700 dockerd[1438]: time="2024-08-07T17:55:42.811970533Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.080390772Z" level=info msg="ignoring event" container=f87ac0281bc2649782f39abdff8151e0c016bf26b3182a3df5c8579e21fe65d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.081622815Z" level=info msg="shim disconnected" id=f87ac0281bc2649782f39abdff8151e0c016bf26b3182a3df5c8579e21fe65d7 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.082180634Z" level=warning msg="cleaning up after shim disconnected" id=f87ac0281bc2649782f39abdff8151e0c016bf26b3182a3df5c8579e21fe65d7 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.082393841Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.098416193Z" level=info msg="ignoring event" container=b9283200bae35965a0ee1c5549a6004ce0c7d70f70b9a374e41cd065d4d5dec3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.099709637Z" level=info msg="shim disconnected" id=b9283200bae35965a0ee1c5549a6004ce0c7d70f70b9a374e41cd065d4d5dec3 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.099819341Z" level=warning msg="cleaning up after shim disconnected" id=b9283200bae35965a0ee1c5549a6004ce0c7d70f70b9a374e41cd065d4d5dec3 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.099888243Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.121832799Z" level=info msg="shim disconnected" id=03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.121978304Z" level=warning msg="cleaning up after shim disconnected" id=03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122200511Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122413919Z" level=info msg="shim disconnected" id=1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122477321Z" level=warning msg="cleaning up after shim disconnected" id=1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122491521Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.122750830Z" level=info msg="ignoring event" container=1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.122822433Z" level=info msg="ignoring event" container=03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.132577069Z" level=info msg="shim disconnected" id=9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.132832477Z" level=warning msg="cleaning up after shim disconnected" id=9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.132974882Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.138866285Z" level=info msg="shim disconnected" id=f907706c00ebe4149920447890732b438dd707eb66023282c7b66ac64e2185b5 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.139038791Z" level=warning msg="cleaning up after shim disconnected" id=f907706c00ebe4149920447890732b438dd707eb66023282c7b66ac64e2185b5 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.139187396Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.155113444Z" level=info msg="shim disconnected" id=a334b1535e2f7b257588b80636584105387ba212b623a89972b0a362d62c0504 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.155224248Z" level=warning msg="cleaning up after shim disconnected" id=a334b1535e2f7b257588b80636584105387ba212b623a89972b0a362d62c0504 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.155238049Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.155389654Z" level=info msg="ignoring event" container=f907706c00ebe4149920447890732b438dd707eb66023282c7b66ac64e2185b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.155569360Z" level=info msg="ignoring event" container=9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.155886271Z" level=info msg="ignoring event" container=a334b1535e2f7b257588b80636584105387ba212b623a89972b0a362d62c0504 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.169713847Z" level=info msg="shim disconnected" id=32e3bea2a931545b3b3164d713e35af3f8439f208d9a2909552dc971a61ca84a namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.170314567Z" level=warning msg="cleaning up after shim disconnected" id=32e3bea2a931545b3b3164d713e35af3f8439f208d9a2909552dc971a61ca84a namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.170672780Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.183709228Z" level=info msg="ignoring event" container=32e3bea2a931545b3b3164d713e35af3f8439f208d9a2909552dc971a61ca84a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.186931639Z" level=info msg="ignoring event" container=8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.187130746Z" level=info msg="shim disconnected" id=8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.187455057Z" level=warning msg="cleaning up after shim disconnected" id=8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.187626563Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.203552411Z" level=info msg="shim disconnected" id=6f09e3713754243813d9c0717cdb77e8bedfc4c511d31490f7ba369a8d1bcc06 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.204052129Z" level=warning msg="cleaning up after shim disconnected" id=6f09e3713754243813d9c0717cdb77e8bedfc4c511d31490f7ba369a8d1bcc06 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.204312938Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.210829062Z" level=info msg="ignoring event" container=88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.210944966Z" level=info msg="ignoring event" container=6f09e3713754243813d9c0717cdb77e8bedfc4c511d31490f7ba369a8d1bcc06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.210804861Z" level=info msg="shim disconnected" id=88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.211582688Z" level=warning msg="cleaning up after shim disconnected" id=88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.211710392Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.243702093Z" level=info msg="shim disconnected" id=4f7e1db775dc208f8832a04d1b684f4ab8f803d4f68869549effeeb024f11b10 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.244412118Z" level=info msg="ignoring event" container=4f7e1db775dc208f8832a04d1b684f4ab8f803d4f68869549effeeb024f11b10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.248240650Z" level=warning msg="cleaning up after shim disconnected" id=4f7e1db775dc208f8832a04d1b684f4ab8f803d4f68869549effeeb024f11b10 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.248409455Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:47 functional-100700 dockerd[1438]: time="2024-08-07T17:55:47.969145341Z" level=info msg="ignoring event" container=8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:47 functional-100700 dockerd[1444]: time="2024-08-07T17:55:47.970761397Z" level=info msg="shim disconnected" id=8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b namespace=moby
	Aug 07 17:55:47 functional-100700 dockerd[1444]: time="2024-08-07T17:55:47.971219213Z" level=warning msg="cleaning up after shim disconnected" id=8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b namespace=moby
	Aug 07 17:55:47 functional-100700 dockerd[1444]: time="2024-08-07T17:55:47.973093477Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:52 functional-100700 dockerd[1438]: time="2024-08-07T17:55:52.961783600Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65
	Aug 07 17:55:53 functional-100700 dockerd[1444]: time="2024-08-07T17:55:53.004669561Z" level=info msg="shim disconnected" id=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65 namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1444]: time="2024-08-07T17:55:53.004855260Z" level=warning msg="cleaning up after shim disconnected" id=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65 namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1444]: time="2024-08-07T17:55:53.004935360Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.005290559Z" level=info msg="ignoring event" container=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082366606Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082484305Z" level=info msg="Daemon shutdown complete"
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082557905Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082900804Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 17:55:54 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 17:55:54 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 17:55:54 functional-100700 systemd[1]: docker.service: Consumed 6.104s CPU time.
	Aug 07 17:55:54 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:55:54 functional-100700 dockerd[4430]: time="2024-08-07T17:55:54.151963549Z" level=info msg="Starting up"
	Aug 07 17:55:54 functional-100700 dockerd[4430]: time="2024-08-07T17:55:54.153307848Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:55:54 functional-100700 dockerd[4430]: time="2024-08-07T17:55:54.154476946Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=4437
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.189116214Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217487588Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217672488Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217724888Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217743588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217923788Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218099587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218341687Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218462487Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218487087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218500687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218536087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218676087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.221803184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.221937684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222170584Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222318784Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222351783Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222377583Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222707583Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222769383Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222791383Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222824883Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222843583Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.223037183Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.223447282Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.223700782Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224128482Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224153982Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224211182Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224225882Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224249282Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224283382Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224302582Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224321982Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224336982Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224348882Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224375782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224415382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224431982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224446182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224464282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224487782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224502981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224516181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224556381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224572481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224585181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224597481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224611681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224628781Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224653481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224668081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224681281Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225080281Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225134081Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225150081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225165281Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225176081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225190081Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225200981Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225570080Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225664580Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225760880Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225784780Z" level=info msg="containerd successfully booted in 0.038263s"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.203623721Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.249075279Z" level=info msg="Loading containers: start."
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.486283283Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.611181043Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.728226393Z" level=info msg="Loading containers: done."
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.754038026Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.754150926Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.805054292Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:55:55 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.817756408Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526494067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526577168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526591169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526693470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.558712297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.563728963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.563952066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.564587075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.615923059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.616533767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.616742570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.616989273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.649839111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.650906025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.651080528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.651280130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002162008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002309810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002327411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002784217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146319020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146402021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146419521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146546323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186224804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186289605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186312905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186429907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293246071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293400074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293416674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293513875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.342920003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.345412453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.345440953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.346309071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.427619805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.427935011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.427958412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.428175716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450251060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450326762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450344662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450438364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021378960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021447242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021467036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021664985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.032269201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.032481345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.033742514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.034300967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.230710505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.231303050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.231404523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.231887696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:59:53 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.701240101Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.891480424Z" level=info msg="ignoring event" container=011ec8239aa1c51c9f4776f7bfc897d8a3292c8e36986f970b1e2c2ca9da86fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.891549927Z" level=info msg="ignoring event" container=b39dd511076447f842f8327dca684bf0a48d714c0f66e22029b88d889e068b15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892313355Z" level=info msg="shim disconnected" id=b39dd511076447f842f8327dca684bf0a48d714c0f66e22029b88d889e068b15 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892402158Z" level=warning msg="cleaning up after shim disconnected" id=b39dd511076447f842f8327dca684bf0a48d714c0f66e22029b88d889e068b15 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892417259Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892816273Z" level=info msg="shim disconnected" id=011ec8239aa1c51c9f4776f7bfc897d8a3292c8e36986f970b1e2c2ca9da86fd namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.893079383Z" level=warning msg="cleaning up after shim disconnected" id=011ec8239aa1c51c9f4776f7bfc897d8a3292c8e36986f970b1e2c2ca9da86fd namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.893229989Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.943367240Z" level=info msg="ignoring event" container=333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.943759054Z" level=info msg="shim disconnected" id=333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.943822956Z" level=warning msg="cleaning up after shim disconnected" id=333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.943835757Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.963273574Z" level=info msg="shim disconnected" id=ee73743ff4167ab913696e97e9a77ab9a2b23dba89c1348473bb28a7089d45c2 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.963426780Z" level=warning msg="cleaning up after shim disconnected" id=ee73743ff4167ab913696e97e9a77ab9a2b23dba89c1348473bb28a7089d45c2 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.963795094Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.980683817Z" level=info msg="ignoring event" container=ee73743ff4167ab913696e97e9a77ab9a2b23dba89c1348473bb28a7089d45c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.981327041Z" level=info msg="ignoring event" container=d57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.981517248Z" level=info msg="ignoring event" container=ff33a3021f789dbad1bd68bc673700c1dc540ec4a83d74c98c4915f4c66932e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.983088406Z" level=info msg="shim disconnected" id=d57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.983163809Z" level=warning msg="cleaning up after shim disconnected" id=d57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.983176709Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.002058106Z" level=info msg="ignoring event" container=60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.002564025Z" level=info msg="shim disconnected" id=60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.003062843Z" level=warning msg="cleaning up after shim disconnected" id=60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.009295273Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.013796640Z" level=info msg="shim disconnected" id=623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.019740659Z" level=warning msg="cleaning up after shim disconnected" id=623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.019785161Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.008834456Z" level=info msg="shim disconnected" id=ff33a3021f789dbad1bd68bc673700c1dc540ec4a83d74c98c4915f4c66932e7 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.021492824Z" level=warning msg="cleaning up after shim disconnected" id=ff33a3021f789dbad1bd68bc673700c1dc540ec4a83d74c98c4915f4c66932e7 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.021550026Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031232683Z" level=info msg="ignoring event" container=a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031289685Z" level=info msg="ignoring event" container=241a3e17f71d6549f6693cc1faa13d1a82113d0126d862a38ff582ea671e7704 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031323187Z" level=info msg="ignoring event" container=0a22b12bc48412adcefcda1aa5f0176acba34d5ef617a5fc6e81727d4bf2fb9f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031338987Z" level=info msg="ignoring event" container=125a3650c7bb9ceb749c5379c1700ef34f0671035364a39c7f8a55a9e244527f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031357688Z" level=info msg="ignoring event" container=623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.016580642Z" level=info msg="shim disconnected" id=a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.033182755Z" level=info msg="shim disconnected" id=125a3650c7bb9ceb749c5379c1700ef34f0671035364a39c7f8a55a9e244527f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.035991259Z" level=warning msg="cleaning up after shim disconnected" id=125a3650c7bb9ceb749c5379c1700ef34f0671035364a39c7f8a55a9e244527f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.036075162Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.036361773Z" level=warning msg="cleaning up after shim disconnected" id=a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.036396774Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.015009784Z" level=info msg="shim disconnected" id=241a3e17f71d6549f6693cc1faa13d1a82113d0126d862a38ff582ea671e7704 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.040268717Z" level=warning msg="cleaning up after shim disconnected" id=241a3e17f71d6549f6693cc1faa13d1a82113d0126d862a38ff582ea671e7704 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.040483025Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.017838089Z" level=info msg="shim disconnected" id=0a22b12bc48412adcefcda1aa5f0176acba34d5ef617a5fc6e81727d4bf2fb9f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.056017998Z" level=warning msg="cleaning up after shim disconnected" id=0a22b12bc48412adcefcda1aa5f0176acba34d5ef617a5fc6e81727d4bf2fb9f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.056073300Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:58 functional-100700 dockerd[4430]: time="2024-08-07T17:59:58.843549639Z" level=info msg="ignoring event" container=3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:58 functional-100700 dockerd[4437]: time="2024-08-07T17:59:58.844293967Z" level=info msg="shim disconnected" id=3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45 namespace=moby
	Aug 07 17:59:58 functional-100700 dockerd[4437]: time="2024-08-07T17:59:58.844701482Z" level=warning msg="cleaning up after shim disconnected" id=3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45 namespace=moby
	Aug 07 17:59:58 functional-100700 dockerd[4437]: time="2024-08-07T17:59:58.845283503Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 18:00:03 functional-100700 dockerd[4430]: time="2024-08-07T18:00:03.891107278Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077
	Aug 07 18:00:03 functional-100700 dockerd[4430]: time="2024-08-07T18:00:03.954477534Z" level=info msg="ignoring event" container=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 18:00:03 functional-100700 dockerd[4437]: time="2024-08-07T18:00:03.957207011Z" level=info msg="shim disconnected" id=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077 namespace=moby
	Aug 07 18:00:03 functional-100700 dockerd[4437]: time="2024-08-07T18:00:03.957346010Z" level=warning msg="cleaning up after shim disconnected" id=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077 namespace=moby
	Aug 07 18:00:03 functional-100700 dockerd[4437]: time="2024-08-07T18:00:03.957506408Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.021732016Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.022302513Z" level=info msg="Daemon shutdown complete"
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.022522311Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.022549211Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 18:00:05 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 18:00:05 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 18:00:05 functional-100700 systemd[1]: docker.service: Consumed 8.144s CPU time.
	Aug 07 18:00:05 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 18:00:05 functional-100700 dockerd[8031]: time="2024-08-07T18:00:05.087273572Z" level=info msg="Starting up"
	Aug 07 18:01:05 functional-100700 dockerd[8031]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 07 18:01:05 functional-100700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 07 18:01:05 functional-100700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 07 18:01:05 functional-100700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0807 18:01:05.216074    2092 out.go:239] * 
	W0807 18:01:05.217856    2092 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 18:01:05.222151    2092 out.go:177] 
	
	
	==> Docker <==
	Aug 07 18:17:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:17:08Z" level=error msg="error getting RW layer size for container ID '623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:17:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:17:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c'"
	Aug 07 18:17:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:17:08Z" level=error msg="error getting RW layer size for container ID '8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:17:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:17:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b'"
	Aug 07 18:17:08 functional-100700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 07 18:17:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:17:08Z" level=error msg="error getting RW layer size for container ID '1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:17:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:17:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4'"
	Aug 07 18:17:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:17:08Z" level=error msg="error getting RW layer size for container ID '88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:17:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:17:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9'"
	Aug 07 18:17:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:17:08Z" level=error msg="error getting RW layer size for container ID '333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:17:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:17:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33'"
	Aug 07 18:17:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:17:08Z" level=error msg="error getting RW layer size for container ID '60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:17:08 functional-100700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 07 18:17:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:17:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804'"
	Aug 07 18:17:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:17:08Z" level=error msg="error getting RW layer size for container ID '03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:17:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:17:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e'"
	Aug 07 18:17:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:17:08Z" level=error msg="error getting RW layer size for container ID '8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:17:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:17:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e'"
	Aug 07 18:17:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:17:08Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Aug 07 18:17:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:17:08Z" level=error msg="error getting RW layer size for container ID '76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:17:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:17:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65'"
	Aug 07 18:17:08 functional-100700 systemd[1]: Failed to start Docker Application Container Engine.
	Aug 07 18:17:09 functional-100700 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Aug 07 18:17:09 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 18:17:09 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-07T18:17:11Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.315437] systemd-fstab-generator[4011]: Ignoring "noauto" option for root device
	[  +5.331377] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.070413] systemd-fstab-generator[4649]: Ignoring "noauto" option for root device
	[  +0.207492] systemd-fstab-generator[4660]: Ignoring "noauto" option for root device
	[  +0.210288] systemd-fstab-generator[4672]: Ignoring "noauto" option for root device
	[  +0.283847] systemd-fstab-generator[4687]: Ignoring "noauto" option for root device
	[  +0.989188] systemd-fstab-generator[4863]: Ignoring "noauto" option for root device
	[Aug 7 17:56] systemd-fstab-generator[4990]: Ignoring "noauto" option for root device
	[  +0.108824] kauditd_printk_skb: 137 callbacks suppressed
	[  +6.503684] kauditd_printk_skb: 52 callbacks suppressed
	[ +11.988991] systemd-fstab-generator[5884]: Ignoring "noauto" option for root device
	[  +0.145733] kauditd_printk_skb: 31 callbacks suppressed
	[Aug 7 17:59] systemd-fstab-generator[7536]: Ignoring "noauto" option for root device
	[  +0.151220] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.513441] systemd-fstab-generator[7585]: Ignoring "noauto" option for root device
	[  +0.282699] systemd-fstab-generator[7598]: Ignoring "noauto" option for root device
	[  +0.340219] systemd-fstab-generator[7612]: Ignoring "noauto" option for root device
	[  +5.344642] kauditd_printk_skb: 89 callbacks suppressed
	[Aug 7 18:12] systemd-fstab-generator[11104]: Ignoring "noauto" option for root device
	[Aug 7 18:13] systemd-fstab-generator[11514]: Ignoring "noauto" option for root device
	[  +0.128359] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 7 18:17] systemd-fstab-generator[12657]: Ignoring "noauto" option for root device
	[  +0.123972] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 7 18:18] systemd-fstab-generator[13086]: Ignoring "noauto" option for root device
	[  +0.146348] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:18:09 up 27 min,  0 users,  load average: 0.09, 0.04, 0.07
	Linux functional-100700 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 07 18:18:06 functional-100700 kubelet[4998]: E0807 18:18:06.860987    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?resourceVersion=0&timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:18:06 functional-100700 kubelet[4998]: E0807 18:18:06.862050    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:18:06 functional-100700 kubelet[4998]: E0807 18:18:06.863353    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:18:06 functional-100700 kubelet[4998]: E0807 18:18:06.864499    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:18:06 functional-100700 kubelet[4998]: E0807 18:18:06.866034    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:18:06 functional-100700 kubelet[4998]: E0807 18:18:06.866202    4998 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Aug 07 18:18:07 functional-100700 kubelet[4998]: E0807 18:18:07.948871    4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused" interval="7s"
	Aug 07 18:18:08 functional-100700 kubelet[4998]: E0807 18:18:08.363957    4998 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-100700.17e9841fb36ee3f5\": dial tcp 172.28.235.211:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-100700.17e9841fb36ee3f5  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-100700,UID:00b3db9060a30b06edb713820a5caeb5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.28.235.211:8441/readyz\": dial tcp 172.28.235.211:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-100700,},FirstTimestamp:2024-08-07 18:00:04.135166965 +0000 UTC m=+242.626871687,LastTimestamp:2024-08-07 18:00:05.133087932 +0000 UTC m=+243.624792654,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-100700,}"
	Aug 07 18:18:09 functional-100700 kubelet[4998]: E0807 18:18:09.231084    4998 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 18m15.974036683s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Aug 07 18:18:09 functional-100700 kubelet[4998]: E0807 18:18:09.322384    4998 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 07 18:18:09 functional-100700 kubelet[4998]: E0807 18:18:09.322461    4998 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:18:09 functional-100700 kubelet[4998]: E0807 18:18:09.322514    4998 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Aug 07 18:18:09 functional-100700 kubelet[4998]: E0807 18:18:09.322678    4998 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:18:09 functional-100700 kubelet[4998]: E0807 18:18:09.322720    4998 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:18:09 functional-100700 kubelet[4998]: E0807 18:18:09.322779    4998 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 07 18:18:09 functional-100700 kubelet[4998]: E0807 18:18:09.322829    4998 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:18:09 functional-100700 kubelet[4998]: E0807 18:18:09.323262    4998 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:18:09 functional-100700 kubelet[4998]: E0807 18:18:09.323437    4998 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:18:09 functional-100700 kubelet[4998]: E0807 18:18:09.323483    4998 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Aug 07 18:18:09 functional-100700 kubelet[4998]: E0807 18:18:09.323520    4998 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:18:09 functional-100700 kubelet[4998]: I0807 18:18:09.323544    4998 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:18:09 functional-100700 kubelet[4998]: E0807 18:18:09.325258    4998 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 07 18:18:09 functional-100700 kubelet[4998]: E0807 18:18:09.325306    4998 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 07 18:18:09 functional-100700 kubelet[4998]: E0807 18:18:09.325893    4998 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Aug 07 18:18:09 functional-100700 kubelet[4998]: E0807 18:18:09.445074    4998 kubelet.go:2919] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/run/docker.sock: read: connection reset by peer"

-- /stdout --
** stderr ** 
	W0807 18:14:11.362441    9036 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0807 18:15:08.460851    9036 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:15:08.507723    9036 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:16:08.650618    9036 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:16:08.696633    9036 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:16:08.728771    9036 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:16:08.765725    9036 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:17:08.877625    9036 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:17:08.924168    9036 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-100700 -n functional-100700
E0807 18:18:20.476980    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-100700 -n functional-100700: exit status 2 (12.919341s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0807 18:18:10.305443   10928 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-100700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (273.11s)
TestFunctional/parallel/PersistentVolumeClaim (582.16s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.28.235.211:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": context deadline exceeded
functional_test_pvc_test.go:44: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-100700 -n functional-100700
functional_test_pvc_test.go:44: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-100700 -n functional-100700: exit status 2 (12.7742429s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0807 18:17:42.675529    2560 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test_pvc_test.go:44: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:44: "functional-100700" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-100700 -n functional-100700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-100700 -n functional-100700: exit status 2 (12.4628515s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0807 18:17:55.454801    3484 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 logs -n 25: (5m2.9291336s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|------------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|  Command   |                                                Args                                                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|------------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ssh        | functional-100700 ssh sudo cat                                                                      | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | /etc/test/nested/copy/9660/hosts                                                                    |                   |                   |         |                     |                     |
	| docker-env | functional-100700 docker-env                                                                        | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC |                     |
	| ssh        | functional-100700 ssh sudo cat                                                                      | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | /etc/ssl/certs/9660.pem                                                                             |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh cat                                                                           | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | /etc/hostname                                                                                       |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh sudo cat                                                                      | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | /usr/share/ca-certificates/9660.pem                                                                 |                   |                   |         |                     |                     |
	| cp         | functional-100700 cp                                                                                | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|            | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh sudo cat                                                                      | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | /etc/ssl/certs/51391683.0                                                                           |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh -n                                                                            | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | functional-100700 sudo cat                                                                          |                   |                   |         |                     |                     |
	|            | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh sudo cat                                                                      | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | /etc/ssl/certs/96602.pem                                                                            |                   |                   |         |                     |                     |
	| cp         | functional-100700 cp functional-100700:/home/docker/cp-test.txt                                     | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:13 UTC |
	|            | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd1063066128\001\cp-test.txt |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh sudo cat                                                                      | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:13 UTC |
	|            | /usr/share/ca-certificates/96602.pem                                                                |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh -n                                                                            | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	|            | functional-100700 sudo cat                                                                          |                   |                   |         |                     |                     |
	|            | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh sudo cat                                                                      | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	|            | /etc/ssl/certs/3ec20f2e.0                                                                           |                   |                   |         |                     |                     |
	| cp         | functional-100700 cp                                                                                | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	|            | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|            | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh -n                                                                            | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	|            | functional-100700 sudo cat                                                                          |                   |                   |         |                     |                     |
	|            | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| tunnel     | functional-100700 tunnel                                                                            | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC |                     |
	|            | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| tunnel     | functional-100700 tunnel                                                                            | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC |                     |
	|            | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| tunnel     | functional-100700 tunnel                                                                            | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC |                     |
	|            | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| addons     | functional-100700 addons list                                                                       | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	| addons     | functional-100700 addons list                                                                       | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	|            | -o json                                                                                             |                   |                   |         |                     |                     |
	| service    | functional-100700 service list                                                                      | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:17 UTC |                     |
	| service    | functional-100700 service list                                                                      | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:17 UTC |                     |
	|            | -o json                                                                                             |                   |                   |         |                     |                     |
	| service    | functional-100700 service                                                                           | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:17 UTC |                     |
	|            | --namespace=default --https                                                                         |                   |                   |         |                     |                     |
	|            | --url hello-node                                                                                    |                   |                   |         |                     |                     |
	| service    | functional-100700                                                                                   | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:17 UTC |                     |
	|            | service hello-node --url                                                                            |                   |                   |         |                     |                     |
	|            | --format={{.IP}}                                                                                    |                   |                   |         |                     |                     |
	| service    | functional-100700 service                                                                           | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:17 UTC |                     |
	|            | hello-node --url                                                                                    |                   |                   |         |                     |                     |
	|------------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 17:58:33
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 17:58:33.249534    2092 out.go:291] Setting OutFile to fd 728 ...
	I0807 17:58:33.250111    2092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:58:33.250111    2092 out.go:304] Setting ErrFile to fd 800...
	I0807 17:58:33.250179    2092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:58:33.269540    2092 out.go:298] Setting JSON to false
	I0807 17:58:33.272574    2092 start.go:129] hostinfo: {"hostname":"minikube6","uptime":315442,"bootTime":1722738070,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4717 Build 19045.4717","kernelVersion":"10.0.19045.4717 Build 19045.4717","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0807 17:58:33.272574    2092 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 17:58:33.277577    2092 out.go:177] * [functional-100700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	I0807 17:58:33.281028    2092 notify.go:220] Checking for updates...
	I0807 17:58:33.284043    2092 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 17:58:33.286477    2092 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 17:58:33.289594    2092 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0807 17:58:33.292627    2092 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 17:58:33.295302    2092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 17:58:33.298825    2092 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 17:58:33.298825    2092 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 17:58:38.714558    2092 out.go:177] * Using the hyperv driver based on existing profile
	I0807 17:58:38.718761    2092 start.go:297] selected driver: hyperv
	I0807 17:58:38.718761    2092 start.go:901] validating driver "hyperv" against &{Name:functional-100700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.3 ClusterName:functional-100700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.235.211 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 17:58:38.718761    2092 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 17:58:38.771046    2092 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 17:58:38.771115    2092 cni.go:84] Creating CNI manager for ""
	I0807 17:58:38.771115    2092 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 17:58:38.771254    2092 start.go:340] cluster config:
	{Name:functional-100700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-100700 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.235.211 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 17:58:38.771533    2092 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 17:58:38.776902    2092 out.go:177] * Starting "functional-100700" primary control-plane node in "functional-100700" cluster
	I0807 17:58:38.780865    2092 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 17:58:38.780865    2092 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0807 17:58:38.780865    2092 cache.go:56] Caching tarball of preloaded images
	I0807 17:58:38.780865    2092 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0807 17:58:38.780865    2092 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 17:58:38.781934    2092 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\config.json ...
	I0807 17:58:38.783866    2092 start.go:360] acquireMachinesLock for functional-100700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 17:58:38.783866    2092 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-100700"
	I0807 17:58:38.783866    2092 start.go:96] Skipping create...Using existing machine configuration
	I0807 17:58:38.783866    2092 fix.go:54] fixHost starting: 
	I0807 17:58:38.784885    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:41.603969    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:41.604807    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:41.604807    2092 fix.go:112] recreateIfNeeded on functional-100700: state=Running err=<nil>
	W0807 17:58:41.604807    2092 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 17:58:41.608533    2092 out.go:177] * Updating the running hyperv "functional-100700" VM ...
	I0807 17:58:41.613016    2092 machine.go:94] provisionDockerMachine start ...
	I0807 17:58:41.613016    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:43.839989    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:43.839989    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:43.840252    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:58:46.452954    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:58:46.452954    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:46.459781    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:58:46.460474    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:58:46.460474    2092 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 17:58:46.591805    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-100700
	
	I0807 17:58:46.591805    2092 buildroot.go:166] provisioning hostname "functional-100700"
	I0807 17:58:46.591805    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:48.755211    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:48.755427    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:48.755465    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:58:51.336039    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:58:51.336039    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:51.342623    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:58:51.342623    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:58:51.342623    2092 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-100700 && echo "functional-100700" | sudo tee /etc/hostname
	I0807 17:58:51.496578    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-100700
	
	I0807 17:58:51.496578    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:53.691439    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:53.691439    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:53.691439    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:58:56.301964    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:58:56.301964    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:56.307512    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:58:56.308194    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:58:56.308194    2092 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-100700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-100700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-100700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 17:58:56.438766    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 17:58:56.438766    2092 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0807 17:58:56.438900    2092 buildroot.go:174] setting up certificates
	I0807 17:58:56.438900    2092 provision.go:84] configureAuth start
	I0807 17:58:56.438900    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:58.655995    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:58.656961    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:58.657071    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:01.290427    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:01.290427    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:01.290831    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:03.469316    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:03.469316    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:03.469551    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:06.075723    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:06.075723    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:06.075723    2092 provision.go:143] copyHostCerts
	I0807 17:59:06.075723    2092 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0807 17:59:06.075723    2092 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0807 17:59:06.076549    2092 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0807 17:59:06.077992    2092 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0807 17:59:06.077992    2092 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0807 17:59:06.078322    2092 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0807 17:59:06.079146    2092 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0807 17:59:06.079146    2092 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0807 17:59:06.079980    2092 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0807 17:59:06.080688    2092 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-100700 san=[127.0.0.1 172.28.235.211 functional-100700 localhost minikube]
	I0807 17:59:06.262311    2092 provision.go:177] copyRemoteCerts
	I0807 17:59:06.274334    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 17:59:06.274334    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:08.466099    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:08.466421    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:08.466494    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:11.061934    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:11.061934    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:11.061934    2092 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:59:11.172314    2092 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8979173s)
	I0807 17:59:11.172848    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 17:59:11.223362    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0807 17:59:11.271809    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0807 17:59:11.319487    2092 provision.go:87] duration metric: took 14.8803963s to configureAuth
	I0807 17:59:11.319487    2092 buildroot.go:189] setting minikube options for container-runtime
	I0807 17:59:11.320542    2092 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 17:59:11.320588    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:13.493491    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:13.493491    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:13.493879    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:16.077977    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:16.077977    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:16.088668    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:16.088783    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:16.088783    2092 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0807 17:59:16.217785    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0807 17:59:16.217785    2092 buildroot.go:70] root file system type: tmpfs
	I0807 17:59:16.218443    2092 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0807 17:59:16.218443    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:18.421400    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:18.421400    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:18.421838    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:21.023576    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:21.024581    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:21.030466    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:21.031160    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:21.031160    2092 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0807 17:59:21.200213    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
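Annotation: the unit file echoed above relies on the ExecStart-clearing drop-in pattern its own comments describe — an empty `ExecStart=` first resets whatever the base unit set, then a single real command follows. A minimal sketch of that pattern against a scratch file (the path and the dockerd flags here are illustrative, not the exact command line from this log):

```shell
unit=$(mktemp)
cat > "$unit" <<'EOF'
[Service]
# Empty ExecStart= clears the command inherited from the base unit; without
# it systemd refuses to start with "more than one ExecStart= setting".
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
grep -c '^ExecStart=' "$unit"   # prints 2: one clearing line, one real one
```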
	I0807 17:59:21.200853    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:23.460840    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:23.460840    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:23.461413    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:26.144922    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:26.144922    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:26.151032    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:26.151032    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:26.151032    2092 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0807 17:59:26.288578    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 17:59:26.288578    2092 machine.go:97] duration metric: took 44.6749905s to provisionDockerMachine
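Annotation: the `diff -u ... || { mv ...; systemctl ... }` one-liner above makes the unit update idempotent — the installed file is only replaced (and the service restarted) when the new file actually differs, since `diff` exits nonzero on a difference. The same pattern, sketched with temp files standing in for `/lib/systemd/system/docker.service` and `.new`, and with the systemctl calls dropped:

```shell
cur=$(mktemp); new=$(mktemp)
printf 'old\n' > "$cur"
printf 'new\n' > "$new"
# diff exits nonzero when the files differ, so || runs the replacement branch
diff -u "$cur" "$new" >/dev/null || mv "$new" "$cur"
cat "$cur"   # prints "new"
```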
	I0807 17:59:26.289136    2092 start.go:293] postStartSetup for "functional-100700" (driver="hyperv")
	I0807 17:59:26.289136    2092 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 17:59:26.303659    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 17:59:26.303659    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:28.549324    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:28.550453    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:28.550627    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:31.258516    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:31.258516    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:31.259399    2092 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:59:31.366436    2092 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0626314s)
	I0807 17:59:31.378993    2092 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 17:59:31.386284    2092 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 17:59:31.386284    2092 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0807 17:59:31.386889    2092 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0807 17:59:31.387933    2092 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> 96602.pem in /etc/ssl/certs
	I0807 17:59:31.388907    2092 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9660\hosts -> hosts in /etc/test/nested/copy/9660
	I0807 17:59:31.400272    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9660
	I0807 17:59:31.419285    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /etc/ssl/certs/96602.pem (1708 bytes)
	I0807 17:59:31.469876    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9660\hosts --> /etc/test/nested/copy/9660/hosts (40 bytes)
	I0807 17:59:31.522816    2092 start.go:296] duration metric: took 5.2336131s for postStartSetup
	I0807 17:59:31.522964    2092 fix.go:56] duration metric: took 52.7384232s for fixHost
	I0807 17:59:31.522964    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:33.801714    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:33.801714    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:33.802069    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:36.454493    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:36.455616    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:36.460762    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:36.461590    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:36.461590    2092 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 17:59:36.584817    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723053576.599616702
	
	I0807 17:59:36.584817    2092 fix.go:216] guest clock: 1723053576.599616702
	I0807 17:59:36.584817    2092 fix.go:229] Guest: 2024-08-07 17:59:36.599616702 +0000 UTC Remote: 2024-08-07 17:59:31.5229646 +0000 UTC m=+58.443653901 (delta=5.076652102s)
	I0807 17:59:36.584817    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:38.789376    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:38.789376    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:38.789376    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:41.408403    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:41.408403    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:41.415021    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:41.415132    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:41.415132    2092 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1723053576
	I0807 17:59:41.554262    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Aug  7 17:59:36 UTC 2024
	
	I0807 17:59:41.554342    2092 fix.go:236] clock set: Wed Aug  7 17:59:36 UTC 2024
	 (err=<nil>)
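Annotation: the clock fix above is driven by the roughly 5s guest/host delta the provisioner computed (`delta=5.076652102s`); when the drift is large enough the guest clock is set with `sudo date -s @<epoch>`. A small arithmetic sketch of that decision using epoch values approximated from this log (the 2s threshold is illustrative, not minikube's actual cutoff):

```shell
guest=1723053576          # guest clock reading (epoch seconds, from the log)
remote=1723053571         # host-side reference, ~5s behind per the delta
drift=$((guest - remote))
if [ "$drift" -gt 2 ]; then
  echo "drift ${drift}s, would run: sudo date -s @$guest"
fi
```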
	I0807 17:59:41.554342    2092 start.go:83] releasing machines lock for "functional-100700", held for 1m2.7696728s
	I0807 17:59:41.554664    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:43.743629    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:43.743629    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:43.743690    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:46.402358    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:46.402358    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:46.408259    2092 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0807 17:59:46.408354    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:46.418508    2092 ssh_runner.go:195] Run: cat /version.json
	I0807 17:59:46.418508    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:48.678664    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:48.678664    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:48.678947    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:48.679407    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:48.679407    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:48.679407    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:51.480814    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:51.481012    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:51.481427    2092 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:59:51.507337    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:51.507337    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:51.508062    2092 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:59:51.568433    2092 ssh_runner.go:235] Completed: cat /version.json: (5.149744s)
	I0807 17:59:51.580326    2092 ssh_runner.go:195] Run: systemctl --version
	I0807 17:59:51.588096    2092 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1797709s)
	W0807 17:59:51.588200    2092 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
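Annotation: the `status 127` above is the shell's "command not found" — the host-side binary name `curl.exe` was invoked inside the Linux guest, where only `curl` exists, and that failed probe is what triggers the misleading registry/proxy warnings a few lines later. A hedged sketch of a name-fallback probe (`pick_bin` is a hypothetical helper for illustration, not minikube code):

```shell
# Echo the first candidate name that resolves to a command; return 127
# (the shell's command-not-found status) when none of them do.
pick_bin() {
  for name in "$@"; do
    if command -v "$name" >/dev/null 2>&1; then
      echo "$name"
      return 0
    fi
  done
  return 127
}
pick_bin curl.exe curl || echo "no curl variant found"
```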
	I0807 17:59:51.605332    2092 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 17:59:51.614105    2092 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 17:59:51.625622    2092 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 17:59:51.647469    2092 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0807 17:59:51.647469    2092 start.go:495] detecting cgroup driver to use...
	I0807 17:59:51.647469    2092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 17:59:51.698634    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	W0807 17:59:51.702217    2092 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0807 17:59:51.702712    2092 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0807 17:59:51.742110    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0807 17:59:51.763294    2092 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0807 17:59:51.777226    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0807 17:59:51.810899    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 17:59:51.842846    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0807 17:59:51.874465    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 17:59:51.906374    2092 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 17:59:51.940856    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0807 17:59:51.972522    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0807 17:59:52.005392    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
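Annotation: the series of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place to select the cgroupfs driver. One of those substitutions, replayed on a scratch copy so the indentation-preserving capture group is visible (the file content is a minimal fragment, not the real config):

```shell
cfg=$(mktemp)
printf '    SystemdCgroup = true\n' > "$cfg"
# \1 keeps the original leading spaces while the value is forced to false
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"   # prints "    SystemdCgroup = false"
```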
	I0807 17:59:52.039394    2092 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 17:59:52.069956    2092 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 17:59:52.100774    2092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:59:52.376248    2092 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0807 17:59:52.411639    2092 start.go:495] detecting cgroup driver to use...
	I0807 17:59:52.424848    2092 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0807 17:59:52.465672    2092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 17:59:52.507105    2092 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 17:59:52.559294    2092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 17:59:52.602621    2092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 17:59:52.628877    2092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 17:59:52.677947    2092 ssh_runner.go:195] Run: which cri-dockerd
	I0807 17:59:52.696445    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0807 17:59:52.713779    2092 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0807 17:59:52.759506    2092 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0807 17:59:53.063312    2092 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0807 17:59:53.341833    2092 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0807 17:59:53.341833    2092 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0807 17:59:53.390184    2092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:59:53.669002    2092 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 18:01:05.110860    2092 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.440852s)
	I0807 18:01:05.123373    2092 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0807 18:01:05.210998    2092 out.go:177] 
	W0807 18:01:05.214928    2092 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 07 17:52:16 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:52:16 functional-100700 dockerd[674]: time="2024-08-07T17:52:16.458341985Z" level=info msg="Starting up"
	Aug 07 17:52:16 functional-100700 dockerd[674]: time="2024-08-07T17:52:16.459483937Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:52:16 functional-100700 dockerd[674]: time="2024-08-07T17:52:16.460719594Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=680
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.493113277Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523238457Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523275259Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523339562Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523356263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523427766Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523446067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523804083Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523901688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523925089Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523938689Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.524034194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.524376109Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.527352746Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.527600157Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528068478Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528219485Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528416294Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528629904Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.581871643Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.581973248Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582080552Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582105454Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582123954Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582283562Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582887889Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583040296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583147301Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583169102Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583185303Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583200004Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583228805Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583248006Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583336710Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583450015Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583471716Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583486017Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583527319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583544019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583560020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583595322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583708227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583728328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583743029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583769930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583818932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583858834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583890635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583921137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583935837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583972939Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584001540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584017041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584038742Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584125046Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584167548Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584182949Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584202250Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584215250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584231651Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584243051Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584478362Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584610368Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584865080Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.585009287Z" level=info msg="containerd successfully booted in 0.093145s"
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.539100050Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.582623719Z" level=info msg="Loading containers: start."
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.757771440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.984107768Z" level=info msg="Loading containers: done."
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.005649861Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.006524396Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.114597281Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.114734986Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:52:18 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:52:49 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.863488345Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.866317260Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.866749062Z" level=info msg="Daemon shutdown complete"
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.867048363Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.867142864Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 17:52:50 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 17:52:50 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 17:52:50 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:52:50 functional-100700 dockerd[1088]: time="2024-08-07T17:52:50.924908641Z" level=info msg="Starting up"
	Aug 07 17:52:50 functional-100700 dockerd[1088]: time="2024-08-07T17:52:50.926025447Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:52:50 functional-100700 dockerd[1088]: time="2024-08-07T17:52:50.927064452Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1094
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.958194110Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986326653Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986364954Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986401554Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986436154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986479654Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986495054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986720855Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986821556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986844356Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986855856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986880556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.987134958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990330074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990438474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990847676Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990948477Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991014577Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991067378Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991319879Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991378979Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991397879Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991412779Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991428979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991496580Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992185983Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992409884Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992647286Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992672686Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992688386Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992794286Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993149888Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993243389Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993271489Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993292489Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993307089Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993318789Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993338389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993353889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993377989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993393189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993409289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993422089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993433690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993445490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993457890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993471990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993490190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993561890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993582490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993597890Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993619090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993632991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993644091Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993764891Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993878492Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993897692Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993910692Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994016892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994112293Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994155593Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994503995Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994761996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994864197Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.995059098Z" level=info msg="containerd successfully booted in 0.037887s"
	Aug 07 17:52:51 functional-100700 dockerd[1088]: time="2024-08-07T17:52:51.976962789Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.014319979Z" level=info msg="Loading containers: start."
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.153625988Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.280398732Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.383509956Z" level=info msg="Loading containers: done."
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.407173376Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.407304177Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.460329447Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.460475947Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:52:52 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.251394538Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.254204052Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 17:53:01 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.260155282Z" level=info msg="Daemon shutdown complete"
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.260456984Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.260937686Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 17:53:02 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 17:53:02 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 17:53:02 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:53:02 functional-100700 dockerd[1438]: time="2024-08-07T17:53:02.321797079Z" level=info msg="Starting up"
	Aug 07 17:53:02 functional-100700 dockerd[1438]: time="2024-08-07T17:53:02.323692689Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:53:02 functional-100700 dockerd[1438]: time="2024-08-07T17:53:02.324967095Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1444
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.356801457Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391549934Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391620134Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391684835Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391704135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391750835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391768635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391946636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392117037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392142737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392156437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392185437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392310838Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395280253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395439954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395604655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395701255Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395733555Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396053557Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396602860Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396804261Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396886161Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396963461Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397040262Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397105662Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397382264Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397664265Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397760665Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397781966Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397796766Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397810066Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397829466Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397849466Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397864866Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397877866Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397890366Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397902266Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397930866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398086167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398131667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398146767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398159868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398173168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398186368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398199368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398230968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398291368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398305068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398318768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398347068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398379069Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398633370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398774671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398795271Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398837271Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399058072Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399114872Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399134672Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399145573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399188373Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399202473Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399579475Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399779376Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399959877Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.400151978Z" level=info msg="containerd successfully booted in 0.045445s"
	Aug 07 17:53:03 functional-100700 dockerd[1438]: time="2024-08-07T17:53:03.371421015Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.638419724Z" level=info msg="Loading containers: start."
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.762102252Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.878637045Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.979158756Z" level=info msg="Loading containers: done."
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.006779696Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.006939697Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.050899521Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.051782725Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:53:07 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.457114123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.457756947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.457814849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.458624579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.566929844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.567055349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.567075749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.567173853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.642620715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.643094533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.643146835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.647326788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.647829406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.647978912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.648649436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.651222930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.841899112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.842260225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.842357529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.842730042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987249334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987542945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987581346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987882057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.071713287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.072501915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.072657920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.072778524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.120557979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.120838189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.121035196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.121432210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836342825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836494127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836527028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836919632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.031505099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.031971604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.032036705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.032230607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.071740773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.071807874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.071821974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.072043276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.388937110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.389400016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.389566918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.390025223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.011330360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.011407583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.013870604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.017327916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.053458090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.053872712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.054119484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.054800983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064470635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064566761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064584666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064692294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.363127500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.363436282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.363741263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.364157974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:51 functional-100700 dockerd[1438]: time="2024-08-07T17:53:51.247657321Z" level=info msg="ignoring event" container=9d5becd51e7186ec4acc242dcdf88a4327987ec048e0ee52430eac221c2fc0fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.250057633Z" level=info msg="shim disconnected" id=9d5becd51e7186ec4acc242dcdf88a4327987ec048e0ee52430eac221c2fc0fe namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.250177263Z" level=warning msg="cleaning up after shim disconnected" id=9d5becd51e7186ec4acc242dcdf88a4327987ec048e0ee52430eac221c2fc0fe namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.250194468Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1438]: time="2024-08-07T17:53:51.423182591Z" level=info msg="ignoring event" container=2a2add9d4c7dacc6804bc6f6e21cf8821ae80a5ef75964e66784019654fe3bc3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.423885070Z" level=info msg="shim disconnected" id=2a2add9d4c7dacc6804bc6f6e21cf8821ae80a5ef75964e66784019654fe3bc3 namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.424164141Z" level=warning msg="cleaning up after shim disconnected" id=2a2add9d4c7dacc6804bc6f6e21cf8821ae80a5ef75964e66784019654fe3bc3 namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.424226557Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:42 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:55:42 functional-100700 dockerd[1438]: time="2024-08-07T17:55:42.811970533Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.080390772Z" level=info msg="ignoring event" container=f87ac0281bc2649782f39abdff8151e0c016bf26b3182a3df5c8579e21fe65d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.081622815Z" level=info msg="shim disconnected" id=f87ac0281bc2649782f39abdff8151e0c016bf26b3182a3df5c8579e21fe65d7 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.082180634Z" level=warning msg="cleaning up after shim disconnected" id=f87ac0281bc2649782f39abdff8151e0c016bf26b3182a3df5c8579e21fe65d7 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.082393841Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.098416193Z" level=info msg="ignoring event" container=b9283200bae35965a0ee1c5549a6004ce0c7d70f70b9a374e41cd065d4d5dec3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.099709637Z" level=info msg="shim disconnected" id=b9283200bae35965a0ee1c5549a6004ce0c7d70f70b9a374e41cd065d4d5dec3 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.099819341Z" level=warning msg="cleaning up after shim disconnected" id=b9283200bae35965a0ee1c5549a6004ce0c7d70f70b9a374e41cd065d4d5dec3 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.099888243Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.121832799Z" level=info msg="shim disconnected" id=03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.121978304Z" level=warning msg="cleaning up after shim disconnected" id=03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122200511Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122413919Z" level=info msg="shim disconnected" id=1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122477321Z" level=warning msg="cleaning up after shim disconnected" id=1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122491521Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.122750830Z" level=info msg="ignoring event" container=1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.122822433Z" level=info msg="ignoring event" container=03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.132577069Z" level=info msg="shim disconnected" id=9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.132832477Z" level=warning msg="cleaning up after shim disconnected" id=9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.132974882Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.138866285Z" level=info msg="shim disconnected" id=f907706c00ebe4149920447890732b438dd707eb66023282c7b66ac64e2185b5 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.139038791Z" level=warning msg="cleaning up after shim disconnected" id=f907706c00ebe4149920447890732b438dd707eb66023282c7b66ac64e2185b5 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.139187396Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.155113444Z" level=info msg="shim disconnected" id=a334b1535e2f7b257588b80636584105387ba212b623a89972b0a362d62c0504 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.155224248Z" level=warning msg="cleaning up after shim disconnected" id=a334b1535e2f7b257588b80636584105387ba212b623a89972b0a362d62c0504 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.155238049Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.155389654Z" level=info msg="ignoring event" container=f907706c00ebe4149920447890732b438dd707eb66023282c7b66ac64e2185b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.155569360Z" level=info msg="ignoring event" container=9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.155886271Z" level=info msg="ignoring event" container=a334b1535e2f7b257588b80636584105387ba212b623a89972b0a362d62c0504 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.169713847Z" level=info msg="shim disconnected" id=32e3bea2a931545b3b3164d713e35af3f8439f208d9a2909552dc971a61ca84a namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.170314567Z" level=warning msg="cleaning up after shim disconnected" id=32e3bea2a931545b3b3164d713e35af3f8439f208d9a2909552dc971a61ca84a namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.170672780Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.183709228Z" level=info msg="ignoring event" container=32e3bea2a931545b3b3164d713e35af3f8439f208d9a2909552dc971a61ca84a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.186931639Z" level=info msg="ignoring event" container=8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.187130746Z" level=info msg="shim disconnected" id=8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.187455057Z" level=warning msg="cleaning up after shim disconnected" id=8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.187626563Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.203552411Z" level=info msg="shim disconnected" id=6f09e3713754243813d9c0717cdb77e8bedfc4c511d31490f7ba369a8d1bcc06 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.204052129Z" level=warning msg="cleaning up after shim disconnected" id=6f09e3713754243813d9c0717cdb77e8bedfc4c511d31490f7ba369a8d1bcc06 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.204312938Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.210829062Z" level=info msg="ignoring event" container=88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.210944966Z" level=info msg="ignoring event" container=6f09e3713754243813d9c0717cdb77e8bedfc4c511d31490f7ba369a8d1bcc06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.210804861Z" level=info msg="shim disconnected" id=88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.211582688Z" level=warning msg="cleaning up after shim disconnected" id=88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.211710392Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.243702093Z" level=info msg="shim disconnected" id=4f7e1db775dc208f8832a04d1b684f4ab8f803d4f68869549effeeb024f11b10 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.244412118Z" level=info msg="ignoring event" container=4f7e1db775dc208f8832a04d1b684f4ab8f803d4f68869549effeeb024f11b10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.248240650Z" level=warning msg="cleaning up after shim disconnected" id=4f7e1db775dc208f8832a04d1b684f4ab8f803d4f68869549effeeb024f11b10 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.248409455Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:47 functional-100700 dockerd[1438]: time="2024-08-07T17:55:47.969145341Z" level=info msg="ignoring event" container=8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:47 functional-100700 dockerd[1444]: time="2024-08-07T17:55:47.970761397Z" level=info msg="shim disconnected" id=8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b namespace=moby
	Aug 07 17:55:47 functional-100700 dockerd[1444]: time="2024-08-07T17:55:47.971219213Z" level=warning msg="cleaning up after shim disconnected" id=8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b namespace=moby
	Aug 07 17:55:47 functional-100700 dockerd[1444]: time="2024-08-07T17:55:47.973093477Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:52 functional-100700 dockerd[1438]: time="2024-08-07T17:55:52.961783600Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65
	Aug 07 17:55:53 functional-100700 dockerd[1444]: time="2024-08-07T17:55:53.004669561Z" level=info msg="shim disconnected" id=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65 namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1444]: time="2024-08-07T17:55:53.004855260Z" level=warning msg="cleaning up after shim disconnected" id=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65 namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1444]: time="2024-08-07T17:55:53.004935360Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.005290559Z" level=info msg="ignoring event" container=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082366606Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082484305Z" level=info msg="Daemon shutdown complete"
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082557905Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082900804Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 17:55:54 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 17:55:54 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 17:55:54 functional-100700 systemd[1]: docker.service: Consumed 6.104s CPU time.
	Aug 07 17:55:54 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:55:54 functional-100700 dockerd[4430]: time="2024-08-07T17:55:54.151963549Z" level=info msg="Starting up"
	Aug 07 17:55:54 functional-100700 dockerd[4430]: time="2024-08-07T17:55:54.153307848Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:55:54 functional-100700 dockerd[4430]: time="2024-08-07T17:55:54.154476946Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=4437
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.189116214Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217487588Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217672488Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217724888Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217743588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217923788Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218099587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218341687Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218462487Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218487087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218500687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218536087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218676087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.221803184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.221937684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222170584Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222318784Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222351783Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222377583Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222707583Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222769383Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222791383Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222824883Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222843583Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.223037183Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.223447282Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.223700782Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224128482Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224153982Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224211182Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224225882Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224249282Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224283382Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224302582Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224321982Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224336982Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224348882Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224375782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224415382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224431982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224446182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224464282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224487782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224502981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224516181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224556381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224572481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224585181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224597481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224611681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224628781Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224653481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224668081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224681281Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225080281Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225134081Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225150081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225165281Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225176081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225190081Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225200981Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225570080Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225664580Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225760880Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225784780Z" level=info msg="containerd successfully booted in 0.038263s"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.203623721Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.249075279Z" level=info msg="Loading containers: start."
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.486283283Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.611181043Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.728226393Z" level=info msg="Loading containers: done."
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.754038026Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.754150926Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.805054292Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:55:55 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.817756408Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526494067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526577168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526591169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526693470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.558712297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.563728963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.563952066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.564587075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.615923059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.616533767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.616742570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.616989273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.649839111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.650906025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.651080528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.651280130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002162008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002309810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002327411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002784217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146319020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146402021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146419521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146546323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186224804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186289605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186312905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186429907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293246071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293400074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293416674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293513875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.342920003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.345412453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.345440953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.346309071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.427619805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.427935011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.427958412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.428175716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450251060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450326762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450344662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450438364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021378960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021447242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021467036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021664985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.032269201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.032481345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.033742514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.034300967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.230710505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.231303050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.231404523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.231887696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:59:53 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.701240101Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.891480424Z" level=info msg="ignoring event" container=011ec8239aa1c51c9f4776f7bfc897d8a3292c8e36986f970b1e2c2ca9da86fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.891549927Z" level=info msg="ignoring event" container=b39dd511076447f842f8327dca684bf0a48d714c0f66e22029b88d889e068b15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892313355Z" level=info msg="shim disconnected" id=b39dd511076447f842f8327dca684bf0a48d714c0f66e22029b88d889e068b15 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892402158Z" level=warning msg="cleaning up after shim disconnected" id=b39dd511076447f842f8327dca684bf0a48d714c0f66e22029b88d889e068b15 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892417259Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892816273Z" level=info msg="shim disconnected" id=011ec8239aa1c51c9f4776f7bfc897d8a3292c8e36986f970b1e2c2ca9da86fd namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.893079383Z" level=warning msg="cleaning up after shim disconnected" id=011ec8239aa1c51c9f4776f7bfc897d8a3292c8e36986f970b1e2c2ca9da86fd namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.893229989Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.943367240Z" level=info msg="ignoring event" container=333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.943759054Z" level=info msg="shim disconnected" id=333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.943822956Z" level=warning msg="cleaning up after shim disconnected" id=333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.943835757Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.963273574Z" level=info msg="shim disconnected" id=ee73743ff4167ab913696e97e9a77ab9a2b23dba89c1348473bb28a7089d45c2 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.963426780Z" level=warning msg="cleaning up after shim disconnected" id=ee73743ff4167ab913696e97e9a77ab9a2b23dba89c1348473bb28a7089d45c2 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.963795094Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.980683817Z" level=info msg="ignoring event" container=ee73743ff4167ab913696e97e9a77ab9a2b23dba89c1348473bb28a7089d45c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.981327041Z" level=info msg="ignoring event" container=d57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.981517248Z" level=info msg="ignoring event" container=ff33a3021f789dbad1bd68bc673700c1dc540ec4a83d74c98c4915f4c66932e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.983088406Z" level=info msg="shim disconnected" id=d57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.983163809Z" level=warning msg="cleaning up after shim disconnected" id=d57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.983176709Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.002058106Z" level=info msg="ignoring event" container=60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.002564025Z" level=info msg="shim disconnected" id=60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.003062843Z" level=warning msg="cleaning up after shim disconnected" id=60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.009295273Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.013796640Z" level=info msg="shim disconnected" id=623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.019740659Z" level=warning msg="cleaning up after shim disconnected" id=623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.019785161Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.008834456Z" level=info msg="shim disconnected" id=ff33a3021f789dbad1bd68bc673700c1dc540ec4a83d74c98c4915f4c66932e7 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.021492824Z" level=warning msg="cleaning up after shim disconnected" id=ff33a3021f789dbad1bd68bc673700c1dc540ec4a83d74c98c4915f4c66932e7 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.021550026Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031232683Z" level=info msg="ignoring event" container=a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031289685Z" level=info msg="ignoring event" container=241a3e17f71d6549f6693cc1faa13d1a82113d0126d862a38ff582ea671e7704 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031323187Z" level=info msg="ignoring event" container=0a22b12bc48412adcefcda1aa5f0176acba34d5ef617a5fc6e81727d4bf2fb9f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031338987Z" level=info msg="ignoring event" container=125a3650c7bb9ceb749c5379c1700ef34f0671035364a39c7f8a55a9e244527f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031357688Z" level=info msg="ignoring event" container=623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.016580642Z" level=info msg="shim disconnected" id=a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.033182755Z" level=info msg="shim disconnected" id=125a3650c7bb9ceb749c5379c1700ef34f0671035364a39c7f8a55a9e244527f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.035991259Z" level=warning msg="cleaning up after shim disconnected" id=125a3650c7bb9ceb749c5379c1700ef34f0671035364a39c7f8a55a9e244527f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.036075162Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.036361773Z" level=warning msg="cleaning up after shim disconnected" id=a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.036396774Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.015009784Z" level=info msg="shim disconnected" id=241a3e17f71d6549f6693cc1faa13d1a82113d0126d862a38ff582ea671e7704 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.040268717Z" level=warning msg="cleaning up after shim disconnected" id=241a3e17f71d6549f6693cc1faa13d1a82113d0126d862a38ff582ea671e7704 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.040483025Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.017838089Z" level=info msg="shim disconnected" id=0a22b12bc48412adcefcda1aa5f0176acba34d5ef617a5fc6e81727d4bf2fb9f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.056017998Z" level=warning msg="cleaning up after shim disconnected" id=0a22b12bc48412adcefcda1aa5f0176acba34d5ef617a5fc6e81727d4bf2fb9f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.056073300Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:58 functional-100700 dockerd[4430]: time="2024-08-07T17:59:58.843549639Z" level=info msg="ignoring event" container=3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:58 functional-100700 dockerd[4437]: time="2024-08-07T17:59:58.844293967Z" level=info msg="shim disconnected" id=3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45 namespace=moby
	Aug 07 17:59:58 functional-100700 dockerd[4437]: time="2024-08-07T17:59:58.844701482Z" level=warning msg="cleaning up after shim disconnected" id=3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45 namespace=moby
	Aug 07 17:59:58 functional-100700 dockerd[4437]: time="2024-08-07T17:59:58.845283503Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 18:00:03 functional-100700 dockerd[4430]: time="2024-08-07T18:00:03.891107278Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077
	Aug 07 18:00:03 functional-100700 dockerd[4430]: time="2024-08-07T18:00:03.954477534Z" level=info msg="ignoring event" container=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 18:00:03 functional-100700 dockerd[4437]: time="2024-08-07T18:00:03.957207011Z" level=info msg="shim disconnected" id=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077 namespace=moby
	Aug 07 18:00:03 functional-100700 dockerd[4437]: time="2024-08-07T18:00:03.957346010Z" level=warning msg="cleaning up after shim disconnected" id=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077 namespace=moby
	Aug 07 18:00:03 functional-100700 dockerd[4437]: time="2024-08-07T18:00:03.957506408Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.021732016Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.022302513Z" level=info msg="Daemon shutdown complete"
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.022522311Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.022549211Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 18:00:05 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 18:00:05 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 18:00:05 functional-100700 systemd[1]: docker.service: Consumed 8.144s CPU time.
	Aug 07 18:00:05 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 18:00:05 functional-100700 dockerd[8031]: time="2024-08-07T18:00:05.087273572Z" level=info msg="Starting up"
	Aug 07 18:01:05 functional-100700 dockerd[8031]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 07 18:01:05 functional-100700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 07 18:01:05 functional-100700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 07 18:01:05 functional-100700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0807 18:01:05.216074    2092 out.go:239] * 
	W0807 18:01:05.217856    2092 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 18:01:05.222151    2092 out.go:177] 
	
	
	==> Docker <==
	Aug 07 18:22:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:22:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID '623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c'"
	Aug 07 18:22:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:22:10Z" level=error msg="error getting RW layer size for container ID 'a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:22:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:22:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76'"
	Aug 07 18:22:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:22:10Z" level=error msg="error getting RW layer size for container ID '60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:22:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:22:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID '60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804'"
	Aug 07 18:22:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:22:10Z" level=error msg="error getting RW layer size for container ID '03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:22:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:22:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID '03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e'"
	Aug 07 18:22:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:22:10Z" level=error msg="error getting RW layer size for container ID 'ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:22:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:22:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077'"
	Aug 07 18:22:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:22:10Z" level=error msg="error getting RW layer size for container ID '88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:22:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:22:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID '88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9'"
	Aug 07 18:22:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:22:10Z" level=error msg="error getting RW layer size for container ID '8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:22:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:22:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e'"
	Aug 07 18:22:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:22:10Z" level=error msg="error getting RW layer size for container ID '333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:22:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:22:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID '333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33'"
	Aug 07 18:22:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:22:10Z" level=error msg="error getting RW layer size for container ID '76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:22:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:22:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID '76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65'"
	Aug 07 18:22:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:22:10Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Aug 07 18:22:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:22:10Z" level=error msg="error getting RW layer size for container ID '9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:22:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:22:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf'"
	Aug 07 18:22:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:22:10Z" level=error msg="error getting RW layer size for container ID '8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:22:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:22:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b'"
	Aug 07 18:22:10 functional-100700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 07 18:22:10 functional-100700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 07 18:22:10 functional-100700 systemd[1]: Failed to start Docker Application Container Engine.
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-07T18:22:12Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.315437] systemd-fstab-generator[4011]: Ignoring "noauto" option for root device
	[  +5.331377] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.070413] systemd-fstab-generator[4649]: Ignoring "noauto" option for root device
	[  +0.207492] systemd-fstab-generator[4660]: Ignoring "noauto" option for root device
	[  +0.210288] systemd-fstab-generator[4672]: Ignoring "noauto" option for root device
	[  +0.283847] systemd-fstab-generator[4687]: Ignoring "noauto" option for root device
	[  +0.989188] systemd-fstab-generator[4863]: Ignoring "noauto" option for root device
	[Aug 7 17:56] systemd-fstab-generator[4990]: Ignoring "noauto" option for root device
	[  +0.108824] kauditd_printk_skb: 137 callbacks suppressed
	[  +6.503684] kauditd_printk_skb: 52 callbacks suppressed
	[ +11.988991] systemd-fstab-generator[5884]: Ignoring "noauto" option for root device
	[  +0.145733] kauditd_printk_skb: 31 callbacks suppressed
	[Aug 7 17:59] systemd-fstab-generator[7536]: Ignoring "noauto" option for root device
	[  +0.151220] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.513441] systemd-fstab-generator[7585]: Ignoring "noauto" option for root device
	[  +0.282699] systemd-fstab-generator[7598]: Ignoring "noauto" option for root device
	[  +0.340219] systemd-fstab-generator[7612]: Ignoring "noauto" option for root device
	[  +5.344642] kauditd_printk_skb: 89 callbacks suppressed
	[Aug 7 18:12] systemd-fstab-generator[11104]: Ignoring "noauto" option for root device
	[Aug 7 18:13] systemd-fstab-generator[11514]: Ignoring "noauto" option for root device
	[  +0.128359] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 7 18:17] systemd-fstab-generator[12657]: Ignoring "noauto" option for root device
	[  +0.123972] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 7 18:18] systemd-fstab-generator[13086]: Ignoring "noauto" option for root device
	[  +0.146348] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:23:10 up 32 min,  0 users,  load average: 0.06, 0.08, 0.07
	Linux functional-100700 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 07 18:23:02 functional-100700 kubelet[4998]: E0807 18:23:02.177400    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?resourceVersion=0&timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:23:02 functional-100700 kubelet[4998]: E0807 18:23:02.178451    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:23:02 functional-100700 kubelet[4998]: E0807 18:23:02.179782    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:23:02 functional-100700 kubelet[4998]: E0807 18:23:02.180779    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:23:02 functional-100700 kubelet[4998]: E0807 18:23:02.181836    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:23:02 functional-100700 kubelet[4998]: E0807 18:23:02.181964    4998 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Aug 07 18:23:02 functional-100700 kubelet[4998]: E0807 18:23:02.537579    4998 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-100700.17e9841fb36ee3f5\": dial tcp 172.28.235.211:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-100700.17e9841fb36ee3f5  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-100700,UID:00b3db9060a30b06edb713820a5caeb5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.28.235.211:8441/readyz\": dial tcp 172.28.235.211:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-100700,},FirstTimestamp:2024-08-07 18:00:04.135166965 +0000 UTC m=+242.626871687,LastTimestamp:2024-08-07 18:00:07.130040694 +0000 UTC m=+245.621745516,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-100700,}"
	Aug 07 18:23:04 functional-100700 kubelet[4998]: E0807 18:23:04.289215    4998 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 23m11.032163731s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Aug 07 18:23:09 functional-100700 kubelet[4998]: E0807 18:23:09.062508    4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused" interval="7s"
	Aug 07 18:23:09 functional-100700 kubelet[4998]: E0807 18:23:09.290231    4998 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 23m16.033185645s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Aug 07 18:23:10 functional-100700 kubelet[4998]: E0807 18:23:10.392736    4998 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Aug 07 18:23:10 functional-100700 kubelet[4998]: E0807 18:23:10.392788    4998 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:23:10 functional-100700 kubelet[4998]: I0807 18:23:10.392803    4998 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:23:10 functional-100700 kubelet[4998]: E0807 18:23:10.392862    4998 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 07 18:23:10 functional-100700 kubelet[4998]: E0807 18:23:10.392895    4998 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:23:10 functional-100700 kubelet[4998]: E0807 18:23:10.393065    4998 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:23:10 functional-100700 kubelet[4998]: E0807 18:23:10.393112    4998 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:23:10 functional-100700 kubelet[4998]: E0807 18:23:10.393150    4998 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Aug 07 18:23:10 functional-100700 kubelet[4998]: E0807 18:23:10.393175    4998 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:23:10 functional-100700 kubelet[4998]: E0807 18:23:10.393211    4998 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:23:10 functional-100700 kubelet[4998]: E0807 18:23:10.393251    4998 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 07 18:23:10 functional-100700 kubelet[4998]: E0807 18:23:10.393277    4998 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:23:10 functional-100700 kubelet[4998]: E0807 18:23:10.393782    4998 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 07 18:23:10 functional-100700 kubelet[4998]: E0807 18:23:10.393829    4998 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 07 18:23:10 functional-100700 kubelet[4998]: E0807 18:23:10.394230    4998 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

-- /stdout --
** stderr ** 
	W0807 18:18:07.900480    3320 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0807 18:19:09.573362    3320 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:20:09.687573    3320 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:20:09.748153    3320 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:20:09.792758    3320 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:21:09.927791    3320 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:21:09.982345    3320 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:21:10.031019    3320 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:22:10.167917    3320 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-100700 -n functional-100700
E0807 18:23:20.478615    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-100700 -n functional-100700: exit status 2 (13.2118778s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0807 18:23:11.627551    3352 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-100700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (582.16s)

TestFunctional/parallel/MySQL (296.61s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-100700 replace --force -f testdata\mysql.yaml
functional_test.go:1789: (dbg) Non-zero exit: kubectl --context functional-100700 replace --force -f testdata\mysql.yaml: exit status 1 (4.2413818s)

** stderr ** 
	error when deleting "testdata\\mysql.yaml": Delete "https://172.28.235.211:8441/api/v1/namespaces/default/services/mysql": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
	error when deleting "testdata\\mysql.yaml": Delete "https://172.28.235.211:8441/apis/apps/v1/namespaces/default/deployments/mysql": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:1791: failed to kubectl replace mysql: args "kubectl --context functional-100700 replace --force -f testdata\\mysql.yaml" failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-100700 -n functional-100700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-100700 -n functional-100700: exit status 2 (13.3013155s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0807 18:12:29.858462    7300 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 logs -n 25: (4m26.2824578s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|------------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|  Command   |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|------------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ssh        | functional-100700 ssh sudo                                               | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|            | crictl images                                                            |                   |                   |         |                     |                     |
	| ssh        | functional-100700                                                        | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|            | ssh sudo docker rmi                                                      |                   |                   |         |                     |                     |
	|            | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh                                                    | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC |                     |
	|            | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|            | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache      | functional-100700 cache reload                                           | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	| ssh        | functional-100700 ssh                                                    | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|            | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|            | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache      | delete                                                                   | minikube          | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|            | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache      | delete                                                                   | minikube          | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|            | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl    | functional-100700 kubectl --                                             | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:57 UTC | 07 Aug 24 17:57 UTC |
	|            | --context functional-100700                                              |                   |                   |         |                     |                     |
	|            | get pods                                                                 |                   |                   |         |                     |                     |
	| start      | -p functional-100700                                                     | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:58 UTC |                     |
	|            | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|            | --wait=all                                                               |                   |                   |         |                     |                     |
	| license    |                                                                          | minikube          | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	| ssh        | functional-100700 ssh echo                                               | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | hello                                                                    |                   |                   |         |                     |                     |
	| config     | functional-100700 config unset                                           | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | cpus                                                                     |                   |                   |         |                     |                     |
	| config     | functional-100700 config get                                             | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC |                     |
	|            | cpus                                                                     |                   |                   |         |                     |                     |
	| config     | functional-100700 config set                                             | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | cpus 2                                                                   |                   |                   |         |                     |                     |
	| config     | functional-100700 config get                                             | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | cpus                                                                     |                   |                   |         |                     |                     |
	| config     | functional-100700 config unset                                           | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | cpus                                                                     |                   |                   |         |                     |                     |
	| config     | functional-100700 config get                                             | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC |                     |
	|            | cpus                                                                     |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh sudo cat                                           | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | /etc/test/nested/copy/9660/hosts                                         |                   |                   |         |                     |                     |
	| docker-env | functional-100700 docker-env                                             | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC |                     |
	| ssh        | functional-100700 ssh sudo cat                                           | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | /etc/ssl/certs/9660.pem                                                  |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh cat                                                | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | /etc/hostname                                                            |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh sudo cat                                           | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | /usr/share/ca-certificates/9660.pem                                      |                   |                   |         |                     |                     |
	| cp         | functional-100700 cp                                                     | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|            | testdata\cp-test.txt                                                     |                   |                   |         |                     |                     |
	|            | /home/docker/cp-test.txt                                                 |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh sudo cat                                           | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC |                     |
	|            | /etc/ssl/certs/51391683.0                                                |                   |                   |         |                     |                     |
	| ssh        | functional-100700 ssh -n                                                 | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC |                     |
	|            | functional-100700 sudo cat                                               |                   |                   |         |                     |                     |
	|            | /home/docker/cp-test.txt                                                 |                   |                   |         |                     |                     |
	|------------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 17:58:33
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 17:58:33.249534    2092 out.go:291] Setting OutFile to fd 728 ...
	I0807 17:58:33.250111    2092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:58:33.250111    2092 out.go:304] Setting ErrFile to fd 800...
	I0807 17:58:33.250179    2092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:58:33.269540    2092 out.go:298] Setting JSON to false
	I0807 17:58:33.272574    2092 start.go:129] hostinfo: {"hostname":"minikube6","uptime":315442,"bootTime":1722738070,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4717 Build 19045.4717","kernelVersion":"10.0.19045.4717 Build 19045.4717","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0807 17:58:33.272574    2092 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 17:58:33.277577    2092 out.go:177] * [functional-100700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	I0807 17:58:33.281028    2092 notify.go:220] Checking for updates...
	I0807 17:58:33.284043    2092 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 17:58:33.286477    2092 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 17:58:33.289594    2092 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0807 17:58:33.292627    2092 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 17:58:33.295302    2092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 17:58:33.298825    2092 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 17:58:33.298825    2092 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 17:58:38.714558    2092 out.go:177] * Using the hyperv driver based on existing profile
	I0807 17:58:38.718761    2092 start.go:297] selected driver: hyperv
	I0807 17:58:38.718761    2092 start.go:901] validating driver "hyperv" against &{Name:functional-100700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.3 ClusterName:functional-100700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.235.211 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 17:58:38.718761    2092 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 17:58:38.771046    2092 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 17:58:38.771115    2092 cni.go:84] Creating CNI manager for ""
	I0807 17:58:38.771115    2092 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 17:58:38.771254    2092 start.go:340] cluster config:
	{Name:functional-100700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-100700 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.235.211 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 17:58:38.771533    2092 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 17:58:38.776902    2092 out.go:177] * Starting "functional-100700" primary control-plane node in "functional-100700" cluster
	I0807 17:58:38.780865    2092 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 17:58:38.780865    2092 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0807 17:58:38.780865    2092 cache.go:56] Caching tarball of preloaded images
	I0807 17:58:38.780865    2092 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0807 17:58:38.780865    2092 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 17:58:38.781934    2092 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\config.json ...
	I0807 17:58:38.783866    2092 start.go:360] acquireMachinesLock for functional-100700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 17:58:38.783866    2092 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-100700"
	I0807 17:58:38.783866    2092 start.go:96] Skipping create...Using existing machine configuration
	I0807 17:58:38.783866    2092 fix.go:54] fixHost starting: 
	I0807 17:58:38.784885    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:41.603969    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:41.604807    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:41.604807    2092 fix.go:112] recreateIfNeeded on functional-100700: state=Running err=<nil>
	W0807 17:58:41.604807    2092 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 17:58:41.608533    2092 out.go:177] * Updating the running hyperv "functional-100700" VM ...
	I0807 17:58:41.613016    2092 machine.go:94] provisionDockerMachine start ...
	I0807 17:58:41.613016    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:43.839989    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:43.839989    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:43.840252    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:58:46.452954    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:58:46.452954    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:46.459781    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:58:46.460474    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:58:46.460474    2092 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 17:58:46.591805    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-100700
	
	I0807 17:58:46.591805    2092 buildroot.go:166] provisioning hostname "functional-100700"
	I0807 17:58:46.591805    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:48.755211    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:48.755427    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:48.755465    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:58:51.336039    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:58:51.336039    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:51.342623    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:58:51.342623    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:58:51.342623    2092 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-100700 && echo "functional-100700" | sudo tee /etc/hostname
	I0807 17:58:51.496578    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-100700
	
	I0807 17:58:51.496578    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:53.691439    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:53.691439    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:53.691439    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:58:56.301964    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:58:56.301964    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:56.307512    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:58:56.308194    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:58:56.308194    2092 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-100700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-100700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-100700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 17:58:56.438766    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 17:58:56.438766    2092 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0807 17:58:56.438900    2092 buildroot.go:174] setting up certificates
	I0807 17:58:56.438900    2092 provision.go:84] configureAuth start
	I0807 17:58:56.438900    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:58:58.655995    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:58:58.656961    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:58:58.657071    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:01.290427    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:01.290427    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:01.290831    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:03.469316    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:03.469316    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:03.469551    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:06.075723    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:06.075723    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:06.075723    2092 provision.go:143] copyHostCerts
	I0807 17:59:06.075723    2092 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0807 17:59:06.075723    2092 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0807 17:59:06.076549    2092 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0807 17:59:06.077992    2092 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0807 17:59:06.077992    2092 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0807 17:59:06.078322    2092 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0807 17:59:06.079146    2092 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0807 17:59:06.079146    2092 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0807 17:59:06.079980    2092 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0807 17:59:06.080688    2092 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-100700 san=[127.0.0.1 172.28.235.211 functional-100700 localhost minikube]
	I0807 17:59:06.262311    2092 provision.go:177] copyRemoteCerts
	I0807 17:59:06.274334    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 17:59:06.274334    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:08.466099    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:08.466421    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:08.466494    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:11.061934    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:11.061934    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:11.061934    2092 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:59:11.172314    2092 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8979173s)
	I0807 17:59:11.172848    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 17:59:11.223362    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0807 17:59:11.271809    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0807 17:59:11.319487    2092 provision.go:87] duration metric: took 14.8803963s to configureAuth
	I0807 17:59:11.319487    2092 buildroot.go:189] setting minikube options for container-runtime
	I0807 17:59:11.320542    2092 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 17:59:11.320588    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:13.493491    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:13.493491    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:13.493879    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:16.077977    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:16.077977    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:16.088668    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:16.088783    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:16.088783    2092 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0807 17:59:16.217785    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0807 17:59:16.217785    2092 buildroot.go:70] root file system type: tmpfs
	I0807 17:59:16.218443    2092 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0807 17:59:16.218443    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:18.421400    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:18.421400    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:18.421838    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:21.023576    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:21.024581    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:21.030466    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:21.031160    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:21.031160    2092 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0807 17:59:21.200213    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0807 17:59:21.200853    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:23.460840    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:23.460840    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:23.461413    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:26.144922    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:26.144922    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:26.151032    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:26.151032    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:26.151032    2092 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0807 17:59:26.288578    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 17:59:26.288578    2092 machine.go:97] duration metric: took 44.6749905s to provisionDockerMachine
	I0807 17:59:26.289136    2092 start.go:293] postStartSetup for "functional-100700" (driver="hyperv")
	I0807 17:59:26.289136    2092 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 17:59:26.303659    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 17:59:26.303659    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:28.549324    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:28.550453    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:28.550627    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:31.258516    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:31.258516    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:31.259399    2092 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:59:31.366436    2092 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0626314s)
	I0807 17:59:31.378993    2092 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 17:59:31.386284    2092 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 17:59:31.386284    2092 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0807 17:59:31.386889    2092 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0807 17:59:31.387933    2092 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> 96602.pem in /etc/ssl/certs
	I0807 17:59:31.388907    2092 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9660\hosts -> hosts in /etc/test/nested/copy/9660
	I0807 17:59:31.400272    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9660
	I0807 17:59:31.419285    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /etc/ssl/certs/96602.pem (1708 bytes)
	I0807 17:59:31.469876    2092 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9660\hosts --> /etc/test/nested/copy/9660/hosts (40 bytes)
	I0807 17:59:31.522816    2092 start.go:296] duration metric: took 5.2336131s for postStartSetup
	I0807 17:59:31.522964    2092 fix.go:56] duration metric: took 52.7384232s for fixHost
	I0807 17:59:31.522964    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:33.801714    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:33.801714    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:33.802069    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:36.454493    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:36.455616    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:36.460762    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:36.461590    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:36.461590    2092 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 17:59:36.584817    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723053576.599616702
	
	I0807 17:59:36.584817    2092 fix.go:216] guest clock: 1723053576.599616702
	I0807 17:59:36.584817    2092 fix.go:229] Guest: 2024-08-07 17:59:36.599616702 +0000 UTC Remote: 2024-08-07 17:59:31.5229646 +0000 UTC m=+58.443653901 (delta=5.076652102s)
	I0807 17:59:36.584817    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:38.789376    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:38.789376    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:38.789376    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:41.408403    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:41.408403    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:41.415021    2092 main.go:141] libmachine: Using SSH client type: native
	I0807 17:59:41.415132    2092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.211 22 <nil> <nil>}
	I0807 17:59:41.415132    2092 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1723053576
	I0807 17:59:41.554262    2092 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Aug  7 17:59:36 UTC 2024
	
	I0807 17:59:41.554342    2092 fix.go:236] clock set: Wed Aug  7 17:59:36 UTC 2024
	 (err=<nil>)
	I0807 17:59:41.554342    2092 start.go:83] releasing machines lock for "functional-100700", held for 1m2.7696728s
	I0807 17:59:41.554664    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:43.743629    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:43.743629    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:43.743690    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:46.402358    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:46.402358    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:46.408259    2092 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0807 17:59:46.408354    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:46.418508    2092 ssh_runner.go:195] Run: cat /version.json
	I0807 17:59:46.418508    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 17:59:48.678664    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:48.678664    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:48.678947    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:48.679407    2092 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 17:59:48.679407    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:48.679407    2092 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 17:59:51.480814    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:51.481012    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:51.481427    2092 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:59:51.507337    2092 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 17:59:51.507337    2092 main.go:141] libmachine: [stderr =====>] : 
	I0807 17:59:51.508062    2092 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 17:59:51.568433    2092 ssh_runner.go:235] Completed: cat /version.json: (5.149744s)
	I0807 17:59:51.580326    2092 ssh_runner.go:195] Run: systemctl --version
	I0807 17:59:51.588096    2092 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1797709s)
	W0807 17:59:51.588200    2092 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0807 17:59:51.605332    2092 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 17:59:51.614105    2092 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 17:59:51.625622    2092 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 17:59:51.647469    2092 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0807 17:59:51.647469    2092 start.go:495] detecting cgroup driver to use...
	I0807 17:59:51.647469    2092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 17:59:51.698634    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	W0807 17:59:51.702217    2092 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0807 17:59:51.702712    2092 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0807 17:59:51.742110    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0807 17:59:51.763294    2092 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0807 17:59:51.777226    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0807 17:59:51.810899    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 17:59:51.842846    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0807 17:59:51.874465    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 17:59:51.906374    2092 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 17:59:51.940856    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0807 17:59:51.972522    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0807 17:59:52.005392    2092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0807 17:59:52.039394    2092 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 17:59:52.069956    2092 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 17:59:52.100774    2092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:59:52.376248    2092 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0807 17:59:52.411639    2092 start.go:495] detecting cgroup driver to use...
	I0807 17:59:52.424848    2092 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0807 17:59:52.465672    2092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 17:59:52.507105    2092 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 17:59:52.559294    2092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 17:59:52.602621    2092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 17:59:52.628877    2092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 17:59:52.677947    2092 ssh_runner.go:195] Run: which cri-dockerd
	I0807 17:59:52.696445    2092 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0807 17:59:52.713779    2092 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0807 17:59:52.759506    2092 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0807 17:59:53.063312    2092 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0807 17:59:53.341833    2092 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0807 17:59:53.341833    2092 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0807 17:59:53.390184    2092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:59:53.669002    2092 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 18:01:05.110860    2092 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.440852s)
	I0807 18:01:05.123373    2092 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0807 18:01:05.210998    2092 out.go:177] 
	W0807 18:01:05.214928    2092 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 07 17:52:16 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:52:16 functional-100700 dockerd[674]: time="2024-08-07T17:52:16.458341985Z" level=info msg="Starting up"
	Aug 07 17:52:16 functional-100700 dockerd[674]: time="2024-08-07T17:52:16.459483937Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:52:16 functional-100700 dockerd[674]: time="2024-08-07T17:52:16.460719594Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=680
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.493113277Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523238457Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523275259Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523339562Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523356263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523427766Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523446067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523804083Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523901688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523925089Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.523938689Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.524034194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.524376109Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.527352746Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.527600157Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528068478Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528219485Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528416294Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.528629904Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.581871643Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.581973248Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582080552Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582105454Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582123954Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582283562Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.582887889Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583040296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583147301Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583169102Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583185303Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583200004Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583228805Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583248006Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583336710Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583450015Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583471716Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583486017Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583527319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583544019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583560020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583595322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583708227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583728328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583743029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583769930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583818932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583858834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583890635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583921137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583935837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.583972939Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584001540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584017041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584038742Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584125046Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584167548Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584182949Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584202250Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584215250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584231651Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584243051Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584478362Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584610368Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.584865080Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:52:16 functional-100700 dockerd[680]: time="2024-08-07T17:52:16.585009287Z" level=info msg="containerd successfully booted in 0.093145s"
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.539100050Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.582623719Z" level=info msg="Loading containers: start."
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.757771440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:52:17 functional-100700 dockerd[674]: time="2024-08-07T17:52:17.984107768Z" level=info msg="Loading containers: done."
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.005649861Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.006524396Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.114597281Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:52:18 functional-100700 dockerd[674]: time="2024-08-07T17:52:18.114734986Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:52:18 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:52:49 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.863488345Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.866317260Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.866749062Z" level=info msg="Daemon shutdown complete"
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.867048363Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 17:52:49 functional-100700 dockerd[674]: time="2024-08-07T17:52:49.867142864Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 17:52:50 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 17:52:50 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 17:52:50 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:52:50 functional-100700 dockerd[1088]: time="2024-08-07T17:52:50.924908641Z" level=info msg="Starting up"
	Aug 07 17:52:50 functional-100700 dockerd[1088]: time="2024-08-07T17:52:50.926025447Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:52:50 functional-100700 dockerd[1088]: time="2024-08-07T17:52:50.927064452Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1094
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.958194110Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986326653Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986364954Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986401554Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986436154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986479654Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986495054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986720855Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986821556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986844356Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986855856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.986880556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.987134958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990330074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990438474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990847676Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.990948477Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991014577Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991067378Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991319879Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991378979Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991397879Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991412779Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991428979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.991496580Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992185983Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992409884Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992647286Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992672686Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992688386Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.992794286Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993149888Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993243389Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993271489Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993292489Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993307089Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993318789Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993338389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993353889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993377989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993393189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993409289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993422089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993433690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993445490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993457890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993471990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993490190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993561890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993582490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993597890Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993619090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993632991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993644091Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993764891Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993878492Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993897692Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.993910692Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994016892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994112293Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994155593Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994503995Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994761996Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.994864197Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:52:50 functional-100700 dockerd[1094]: time="2024-08-07T17:52:50.995059098Z" level=info msg="containerd successfully booted in 0.037887s"
	Aug 07 17:52:51 functional-100700 dockerd[1088]: time="2024-08-07T17:52:51.976962789Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.014319979Z" level=info msg="Loading containers: start."
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.153625988Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.280398732Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.383509956Z" level=info msg="Loading containers: done."
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.407173376Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.407304177Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.460329447Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:52:52 functional-100700 dockerd[1088]: time="2024-08-07T17:52:52.460475947Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:52:52 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.251394538Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.254204052Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 17:53:01 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.260155282Z" level=info msg="Daemon shutdown complete"
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.260456984Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 17:53:01 functional-100700 dockerd[1088]: time="2024-08-07T17:53:01.260937686Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 17:53:02 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 17:53:02 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 17:53:02 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:53:02 functional-100700 dockerd[1438]: time="2024-08-07T17:53:02.321797079Z" level=info msg="Starting up"
	Aug 07 17:53:02 functional-100700 dockerd[1438]: time="2024-08-07T17:53:02.323692689Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:53:02 functional-100700 dockerd[1438]: time="2024-08-07T17:53:02.324967095Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1444
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.356801457Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391549934Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391620134Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391684835Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391704135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391750835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391768635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.391946636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392117037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392142737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392156437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392185437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.392310838Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395280253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395439954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395604655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395701255Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.395733555Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396053557Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396602860Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396804261Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396886161Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.396963461Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397040262Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397105662Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397382264Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397664265Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397760665Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397781966Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397796766Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397810066Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397829466Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397849466Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397864866Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397877866Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397890366Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397902266Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.397930866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398086167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398131667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398146767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398159868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398173168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398186368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398199368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398230968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398291368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398305068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398318768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398347068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398379069Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398633370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398774671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398795271Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.398837271Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399058072Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399114872Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399134672Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399145573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399188373Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399202473Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399579475Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399779376Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.399959877Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:53:02 functional-100700 dockerd[1444]: time="2024-08-07T17:53:02.400151978Z" level=info msg="containerd successfully booted in 0.045445s"
	Aug 07 17:53:03 functional-100700 dockerd[1438]: time="2024-08-07T17:53:03.371421015Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.638419724Z" level=info msg="Loading containers: start."
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.762102252Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.878637045Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 07 17:53:06 functional-100700 dockerd[1438]: time="2024-08-07T17:53:06.979158756Z" level=info msg="Loading containers: done."
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.006779696Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.006939697Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.050899521Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:53:07 functional-100700 dockerd[1438]: time="2024-08-07T17:53:07.051782725Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:53:07 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.457114123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.457756947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.457814849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.458624579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.566929844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.567055349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.567075749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.567173853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.642620715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.643094533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.643146835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.647326788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.647829406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.647978912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.648649436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.651222930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.841899112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.842260225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.842357529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.842730042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987249334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987542945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987581346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:16 functional-100700 dockerd[1444]: time="2024-08-07T17:53:16.987882057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.071713287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.072501915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.072657920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.072778524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.120557979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.120838189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.121035196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:17 functional-100700 dockerd[1444]: time="2024-08-07T17:53:17.121432210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836342825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836494127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836527028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:39 functional-100700 dockerd[1444]: time="2024-08-07T17:53:39.836919632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.031505099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.031971604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.032036705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.032230607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.071740773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.071807874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.071821974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.072043276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.388937110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.389400016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.389566918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:40 functional-100700 dockerd[1444]: time="2024-08-07T17:53:40.390025223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.011330360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.011407583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.013870604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.017327916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.053458090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.053872712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.054119484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:41 functional-100700 dockerd[1444]: time="2024-08-07T17:53:41.054800983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064470635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064566761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064584666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.064692294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.363127500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.363436282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.363741263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:48 functional-100700 dockerd[1444]: time="2024-08-07T17:53:48.364157974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:53:51 functional-100700 dockerd[1438]: time="2024-08-07T17:53:51.247657321Z" level=info msg="ignoring event" container=9d5becd51e7186ec4acc242dcdf88a4327987ec048e0ee52430eac221c2fc0fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.250057633Z" level=info msg="shim disconnected" id=9d5becd51e7186ec4acc242dcdf88a4327987ec048e0ee52430eac221c2fc0fe namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.250177263Z" level=warning msg="cleaning up after shim disconnected" id=9d5becd51e7186ec4acc242dcdf88a4327987ec048e0ee52430eac221c2fc0fe namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.250194468Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1438]: time="2024-08-07T17:53:51.423182591Z" level=info msg="ignoring event" container=2a2add9d4c7dacc6804bc6f6e21cf8821ae80a5ef75964e66784019654fe3bc3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.423885070Z" level=info msg="shim disconnected" id=2a2add9d4c7dacc6804bc6f6e21cf8821ae80a5ef75964e66784019654fe3bc3 namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.424164141Z" level=warning msg="cleaning up after shim disconnected" id=2a2add9d4c7dacc6804bc6f6e21cf8821ae80a5ef75964e66784019654fe3bc3 namespace=moby
	Aug 07 17:53:51 functional-100700 dockerd[1444]: time="2024-08-07T17:53:51.424226557Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:42 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:55:42 functional-100700 dockerd[1438]: time="2024-08-07T17:55:42.811970533Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.080390772Z" level=info msg="ignoring event" container=f87ac0281bc2649782f39abdff8151e0c016bf26b3182a3df5c8579e21fe65d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.081622815Z" level=info msg="shim disconnected" id=f87ac0281bc2649782f39abdff8151e0c016bf26b3182a3df5c8579e21fe65d7 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.082180634Z" level=warning msg="cleaning up after shim disconnected" id=f87ac0281bc2649782f39abdff8151e0c016bf26b3182a3df5c8579e21fe65d7 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.082393841Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.098416193Z" level=info msg="ignoring event" container=b9283200bae35965a0ee1c5549a6004ce0c7d70f70b9a374e41cd065d4d5dec3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.099709637Z" level=info msg="shim disconnected" id=b9283200bae35965a0ee1c5549a6004ce0c7d70f70b9a374e41cd065d4d5dec3 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.099819341Z" level=warning msg="cleaning up after shim disconnected" id=b9283200bae35965a0ee1c5549a6004ce0c7d70f70b9a374e41cd065d4d5dec3 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.099888243Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.121832799Z" level=info msg="shim disconnected" id=03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.121978304Z" level=warning msg="cleaning up after shim disconnected" id=03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122200511Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122413919Z" level=info msg="shim disconnected" id=1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122477321Z" level=warning msg="cleaning up after shim disconnected" id=1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.122491521Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.122750830Z" level=info msg="ignoring event" container=1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.122822433Z" level=info msg="ignoring event" container=03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.132577069Z" level=info msg="shim disconnected" id=9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.132832477Z" level=warning msg="cleaning up after shim disconnected" id=9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.132974882Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.138866285Z" level=info msg="shim disconnected" id=f907706c00ebe4149920447890732b438dd707eb66023282c7b66ac64e2185b5 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.139038791Z" level=warning msg="cleaning up after shim disconnected" id=f907706c00ebe4149920447890732b438dd707eb66023282c7b66ac64e2185b5 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.139187396Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.155113444Z" level=info msg="shim disconnected" id=a334b1535e2f7b257588b80636584105387ba212b623a89972b0a362d62c0504 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.155224248Z" level=warning msg="cleaning up after shim disconnected" id=a334b1535e2f7b257588b80636584105387ba212b623a89972b0a362d62c0504 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.155238049Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.155389654Z" level=info msg="ignoring event" container=f907706c00ebe4149920447890732b438dd707eb66023282c7b66ac64e2185b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.155569360Z" level=info msg="ignoring event" container=9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.155886271Z" level=info msg="ignoring event" container=a334b1535e2f7b257588b80636584105387ba212b623a89972b0a362d62c0504 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.169713847Z" level=info msg="shim disconnected" id=32e3bea2a931545b3b3164d713e35af3f8439f208d9a2909552dc971a61ca84a namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.170314567Z" level=warning msg="cleaning up after shim disconnected" id=32e3bea2a931545b3b3164d713e35af3f8439f208d9a2909552dc971a61ca84a namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.170672780Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.183709228Z" level=info msg="ignoring event" container=32e3bea2a931545b3b3164d713e35af3f8439f208d9a2909552dc971a61ca84a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.186931639Z" level=info msg="ignoring event" container=8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.187130746Z" level=info msg="shim disconnected" id=8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.187455057Z" level=warning msg="cleaning up after shim disconnected" id=8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.187626563Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.203552411Z" level=info msg="shim disconnected" id=6f09e3713754243813d9c0717cdb77e8bedfc4c511d31490f7ba369a8d1bcc06 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.204052129Z" level=warning msg="cleaning up after shim disconnected" id=6f09e3713754243813d9c0717cdb77e8bedfc4c511d31490f7ba369a8d1bcc06 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.204312938Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.210829062Z" level=info msg="ignoring event" container=88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.210944966Z" level=info msg="ignoring event" container=6f09e3713754243813d9c0717cdb77e8bedfc4c511d31490f7ba369a8d1bcc06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.210804861Z" level=info msg="shim disconnected" id=88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.211582688Z" level=warning msg="cleaning up after shim disconnected" id=88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.211710392Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.243702093Z" level=info msg="shim disconnected" id=4f7e1db775dc208f8832a04d1b684f4ab8f803d4f68869549effeeb024f11b10 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1438]: time="2024-08-07T17:55:43.244412118Z" level=info msg="ignoring event" container=4f7e1db775dc208f8832a04d1b684f4ab8f803d4f68869549effeeb024f11b10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.248240650Z" level=warning msg="cleaning up after shim disconnected" id=4f7e1db775dc208f8832a04d1b684f4ab8f803d4f68869549effeeb024f11b10 namespace=moby
	Aug 07 17:55:43 functional-100700 dockerd[1444]: time="2024-08-07T17:55:43.248409455Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:47 functional-100700 dockerd[1438]: time="2024-08-07T17:55:47.969145341Z" level=info msg="ignoring event" container=8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:47 functional-100700 dockerd[1444]: time="2024-08-07T17:55:47.970761397Z" level=info msg="shim disconnected" id=8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b namespace=moby
	Aug 07 17:55:47 functional-100700 dockerd[1444]: time="2024-08-07T17:55:47.971219213Z" level=warning msg="cleaning up after shim disconnected" id=8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b namespace=moby
	Aug 07 17:55:47 functional-100700 dockerd[1444]: time="2024-08-07T17:55:47.973093477Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:52 functional-100700 dockerd[1438]: time="2024-08-07T17:55:52.961783600Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65
	Aug 07 17:55:53 functional-100700 dockerd[1444]: time="2024-08-07T17:55:53.004669561Z" level=info msg="shim disconnected" id=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65 namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1444]: time="2024-08-07T17:55:53.004855260Z" level=warning msg="cleaning up after shim disconnected" id=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65 namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1444]: time="2024-08-07T17:55:53.004935360Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.005290559Z" level=info msg="ignoring event" container=76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082366606Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082484305Z" level=info msg="Daemon shutdown complete"
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082557905Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 17:55:53 functional-100700 dockerd[1438]: time="2024-08-07T17:55:53.082900804Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 17:55:54 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 17:55:54 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 17:55:54 functional-100700 systemd[1]: docker.service: Consumed 6.104s CPU time.
	Aug 07 17:55:54 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 17:55:54 functional-100700 dockerd[4430]: time="2024-08-07T17:55:54.151963549Z" level=info msg="Starting up"
	Aug 07 17:55:54 functional-100700 dockerd[4430]: time="2024-08-07T17:55:54.153307848Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 17:55:54 functional-100700 dockerd[4430]: time="2024-08-07T17:55:54.154476946Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=4437
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.189116214Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217487588Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217672488Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217724888Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217743588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.217923788Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218099587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218341687Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218462487Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218487087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218500687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218536087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.218676087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.221803184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.221937684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222170584Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222318784Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222351783Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222377583Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222707583Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222769383Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222791383Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222824883Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.222843583Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.223037183Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.223447282Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.223700782Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224128482Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224153982Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224211182Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224225882Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224249282Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224283382Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224302582Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224321982Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224336982Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224348882Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224375782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224415382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224431982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224446182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224464282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224487782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224502981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224516181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224556381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224572481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224585181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224597481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224611681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224628781Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224653481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224668081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.224681281Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225080281Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225134081Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225150081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225165281Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225176081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225190081Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225200981Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225570080Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225664580Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225760880Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 17:55:54 functional-100700 dockerd[4437]: time="2024-08-07T17:55:54.225784780Z" level=info msg="containerd successfully booted in 0.038263s"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.203623721Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.249075279Z" level=info msg="Loading containers: start."
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.486283283Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.611181043Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.728226393Z" level=info msg="Loading containers: done."
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.754038026Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.754150926Z" level=info msg="Daemon has completed initialization"
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.805054292Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 17:55:55 functional-100700 systemd[1]: Started Docker Application Container Engine.
	Aug 07 17:55:55 functional-100700 dockerd[4430]: time="2024-08-07T17:55:55.817756408Z" level=info msg="API listen on [::]:2376"
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526494067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526577168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526591169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.526693470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.558712297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.563728963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.563952066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.564587075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.615923059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.616533767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.616742570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.616989273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.649839111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.650906025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.651080528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:02 functional-100700 dockerd[4437]: time="2024-08-07T17:56:02.651280130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002162008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002309810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002327411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.002784217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146319020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146402021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146419521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.146546323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186224804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186289605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186312905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.186429907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293246071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293400074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293416674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:03 functional-100700 dockerd[4437]: time="2024-08-07T17:56:03.293513875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.342920003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.345412453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.345440953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.346309071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.427619805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.427935011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.427958412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.428175716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450251060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450326762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450344662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:08 functional-100700 dockerd[4437]: time="2024-08-07T17:56:08.450438364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021378960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021447242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021467036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.021664985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.032269201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.032481345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.033742514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.034300967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.230710505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.231303050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.231404523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:56:09 functional-100700 dockerd[4437]: time="2024-08-07T17:56:09.231887696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 17:59:53 functional-100700 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.701240101Z" level=info msg="Processing signal 'terminated'"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.891480424Z" level=info msg="ignoring event" container=011ec8239aa1c51c9f4776f7bfc897d8a3292c8e36986f970b1e2c2ca9da86fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.891549927Z" level=info msg="ignoring event" container=b39dd511076447f842f8327dca684bf0a48d714c0f66e22029b88d889e068b15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892313355Z" level=info msg="shim disconnected" id=b39dd511076447f842f8327dca684bf0a48d714c0f66e22029b88d889e068b15 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892402158Z" level=warning msg="cleaning up after shim disconnected" id=b39dd511076447f842f8327dca684bf0a48d714c0f66e22029b88d889e068b15 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892417259Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.892816273Z" level=info msg="shim disconnected" id=011ec8239aa1c51c9f4776f7bfc897d8a3292c8e36986f970b1e2c2ca9da86fd namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.893079383Z" level=warning msg="cleaning up after shim disconnected" id=011ec8239aa1c51c9f4776f7bfc897d8a3292c8e36986f970b1e2c2ca9da86fd namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.893229989Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.943367240Z" level=info msg="ignoring event" container=333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.943759054Z" level=info msg="shim disconnected" id=333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.943822956Z" level=warning msg="cleaning up after shim disconnected" id=333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.943835757Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.963273574Z" level=info msg="shim disconnected" id=ee73743ff4167ab913696e97e9a77ab9a2b23dba89c1348473bb28a7089d45c2 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.963426780Z" level=warning msg="cleaning up after shim disconnected" id=ee73743ff4167ab913696e97e9a77ab9a2b23dba89c1348473bb28a7089d45c2 namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.963795094Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.980683817Z" level=info msg="ignoring event" container=ee73743ff4167ab913696e97e9a77ab9a2b23dba89c1348473bb28a7089d45c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.981327041Z" level=info msg="ignoring event" container=d57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4430]: time="2024-08-07T17:59:53.981517248Z" level=info msg="ignoring event" container=ff33a3021f789dbad1bd68bc673700c1dc540ec4a83d74c98c4915f4c66932e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.983088406Z" level=info msg="shim disconnected" id=d57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.983163809Z" level=warning msg="cleaning up after shim disconnected" id=d57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b namespace=moby
	Aug 07 17:59:53 functional-100700 dockerd[4437]: time="2024-08-07T17:59:53.983176709Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.002058106Z" level=info msg="ignoring event" container=60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.002564025Z" level=info msg="shim disconnected" id=60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.003062843Z" level=warning msg="cleaning up after shim disconnected" id=60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.009295273Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.013796640Z" level=info msg="shim disconnected" id=623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.019740659Z" level=warning msg="cleaning up after shim disconnected" id=623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.019785161Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.008834456Z" level=info msg="shim disconnected" id=ff33a3021f789dbad1bd68bc673700c1dc540ec4a83d74c98c4915f4c66932e7 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.021492824Z" level=warning msg="cleaning up after shim disconnected" id=ff33a3021f789dbad1bd68bc673700c1dc540ec4a83d74c98c4915f4c66932e7 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.021550026Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031232683Z" level=info msg="ignoring event" container=a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031289685Z" level=info msg="ignoring event" container=241a3e17f71d6549f6693cc1faa13d1a82113d0126d862a38ff582ea671e7704 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031323187Z" level=info msg="ignoring event" container=0a22b12bc48412adcefcda1aa5f0176acba34d5ef617a5fc6e81727d4bf2fb9f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031338987Z" level=info msg="ignoring event" container=125a3650c7bb9ceb749c5379c1700ef34f0671035364a39c7f8a55a9e244527f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4430]: time="2024-08-07T17:59:54.031357688Z" level=info msg="ignoring event" container=623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.016580642Z" level=info msg="shim disconnected" id=a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.033182755Z" level=info msg="shim disconnected" id=125a3650c7bb9ceb749c5379c1700ef34f0671035364a39c7f8a55a9e244527f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.035991259Z" level=warning msg="cleaning up after shim disconnected" id=125a3650c7bb9ceb749c5379c1700ef34f0671035364a39c7f8a55a9e244527f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.036075162Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.036361773Z" level=warning msg="cleaning up after shim disconnected" id=a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.036396774Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.015009784Z" level=info msg="shim disconnected" id=241a3e17f71d6549f6693cc1faa13d1a82113d0126d862a38ff582ea671e7704 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.040268717Z" level=warning msg="cleaning up after shim disconnected" id=241a3e17f71d6549f6693cc1faa13d1a82113d0126d862a38ff582ea671e7704 namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.040483025Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.017838089Z" level=info msg="shim disconnected" id=0a22b12bc48412adcefcda1aa5f0176acba34d5ef617a5fc6e81727d4bf2fb9f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.056017998Z" level=warning msg="cleaning up after shim disconnected" id=0a22b12bc48412adcefcda1aa5f0176acba34d5ef617a5fc6e81727d4bf2fb9f namespace=moby
	Aug 07 17:59:54 functional-100700 dockerd[4437]: time="2024-08-07T17:59:54.056073300Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 17:59:58 functional-100700 dockerd[4430]: time="2024-08-07T17:59:58.843549639Z" level=info msg="ignoring event" container=3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 17:59:58 functional-100700 dockerd[4437]: time="2024-08-07T17:59:58.844293967Z" level=info msg="shim disconnected" id=3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45 namespace=moby
	Aug 07 17:59:58 functional-100700 dockerd[4437]: time="2024-08-07T17:59:58.844701482Z" level=warning msg="cleaning up after shim disconnected" id=3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45 namespace=moby
	Aug 07 17:59:58 functional-100700 dockerd[4437]: time="2024-08-07T17:59:58.845283503Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 18:00:03 functional-100700 dockerd[4430]: time="2024-08-07T18:00:03.891107278Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077
	Aug 07 18:00:03 functional-100700 dockerd[4430]: time="2024-08-07T18:00:03.954477534Z" level=info msg="ignoring event" container=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 18:00:03 functional-100700 dockerd[4437]: time="2024-08-07T18:00:03.957207011Z" level=info msg="shim disconnected" id=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077 namespace=moby
	Aug 07 18:00:03 functional-100700 dockerd[4437]: time="2024-08-07T18:00:03.957346010Z" level=warning msg="cleaning up after shim disconnected" id=ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077 namespace=moby
	Aug 07 18:00:03 functional-100700 dockerd[4437]: time="2024-08-07T18:00:03.957506408Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.021732016Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.022302513Z" level=info msg="Daemon shutdown complete"
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.022522311Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 18:00:04 functional-100700 dockerd[4430]: time="2024-08-07T18:00:04.022549211Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 18:00:05 functional-100700 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 18:00:05 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 18:00:05 functional-100700 systemd[1]: docker.service: Consumed 8.144s CPU time.
	Aug 07 18:00:05 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 18:00:05 functional-100700 dockerd[8031]: time="2024-08-07T18:00:05.087273572Z" level=info msg="Starting up"
	Aug 07 18:01:05 functional-100700 dockerd[8031]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 07 18:01:05 functional-100700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 07 18:01:05 functional-100700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 07 18:01:05 functional-100700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0807 18:01:05.216074    2092 out.go:239] * 
	W0807 18:01:05.217856    2092 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 18:01:05.222151    2092 out.go:177] 
	
	
	==> Docker <==
	Aug 07 18:16:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:16:08Z" level=error msg="error getting RW layer size for container ID '623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:16:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:16:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c'"
	Aug 07 18:16:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:16:08Z" level=error msg="error getting RW layer size for container ID '60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:16:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:16:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804'"
	Aug 07 18:16:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:16:08Z" level=error msg="error getting RW layer size for container ID '88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:16:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:16:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '88ef6e03a7d49636d4ec885944dcc92b4c233c0740da2e8f511ce9078e817eb9'"
	Aug 07 18:16:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:16:08Z" level=error msg="error getting RW layer size for container ID 'a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:16:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:16:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76'"
	Aug 07 18:16:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:16:08Z" level=error msg="error getting RW layer size for container ID '8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:16:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:16:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e'"
	Aug 07 18:16:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:16:08Z" level=error msg="error getting RW layer size for container ID '1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:16:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:16:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4'"
	Aug 07 18:16:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:16:08Z" level=error msg="error getting RW layer size for container ID 'ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:16:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:16:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ceb9a86ed09ccf71c371818d759e450667e70979c75506a5233093a4e3efe077'"
	Aug 07 18:16:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:16:08Z" level=error msg="error getting RW layer size for container ID '03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:16:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:16:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '03079679d68cc19538e02f39c48251b50393f7dd981e609af3dcc96c420cd29e'"
	Aug 07 18:16:08 functional-100700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 07 18:16:08 functional-100700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 07 18:16:08 functional-100700 systemd[1]: Failed to start Docker Application Container Engine.
	Aug 07 18:16:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:16:08Z" level=error msg="error getting RW layer size for container ID 'd57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/d57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:16:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:16:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b'"
	Aug 07 18:16:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:16:08Z" level=error msg="error getting RW layer size for container ID '3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:16:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:16:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45'"
	Aug 07 18:16:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:16:08Z" level=error msg="error getting RW layer size for container ID '9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:16:08 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:16:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9f7b90986285c55035e0f34c86f5eaccab75a819c487d2fb0ea04853921c13bf'"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-07T18:16:10Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.649998] systemd-fstab-generator[3985]: Ignoring "noauto" option for root device
	[  +0.264046] systemd-fstab-generator[3997]: Ignoring "noauto" option for root device
	[  +0.315437] systemd-fstab-generator[4011]: Ignoring "noauto" option for root device
	[  +5.331377] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.070413] systemd-fstab-generator[4649]: Ignoring "noauto" option for root device
	[  +0.207492] systemd-fstab-generator[4660]: Ignoring "noauto" option for root device
	[  +0.210288] systemd-fstab-generator[4672]: Ignoring "noauto" option for root device
	[  +0.283847] systemd-fstab-generator[4687]: Ignoring "noauto" option for root device
	[  +0.989188] systemd-fstab-generator[4863]: Ignoring "noauto" option for root device
	[Aug 7 17:56] systemd-fstab-generator[4990]: Ignoring "noauto" option for root device
	[  +0.108824] kauditd_printk_skb: 137 callbacks suppressed
	[  +6.503684] kauditd_printk_skb: 52 callbacks suppressed
	[ +11.988991] systemd-fstab-generator[5884]: Ignoring "noauto" option for root device
	[  +0.145733] kauditd_printk_skb: 31 callbacks suppressed
	[Aug 7 17:59] systemd-fstab-generator[7536]: Ignoring "noauto" option for root device
	[  +0.151220] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.513441] systemd-fstab-generator[7585]: Ignoring "noauto" option for root device
	[  +0.282699] systemd-fstab-generator[7598]: Ignoring "noauto" option for root device
	[  +0.340219] systemd-fstab-generator[7612]: Ignoring "noauto" option for root device
	[  +5.344642] kauditd_printk_skb: 89 callbacks suppressed
	[Aug 7 18:12] systemd-fstab-generator[11104]: Ignoring "noauto" option for root device
	[Aug 7 18:13] systemd-fstab-generator[11514]: Ignoring "noauto" option for root device
	[  +0.128359] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 7 18:17] systemd-fstab-generator[12657]: Ignoring "noauto" option for root device
	[  +0.123972] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:17:09 up 25 min,  0 users,  load average: 0.00, 0.01, 0.06
	Linux functional-100700 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 07 18:17:04 functional-100700 kubelet[4998]: E0807 18:17:04.219596    4998 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 17m10.96254565s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Aug 07 18:17:04 functional-100700 kubelet[4998]: E0807 18:17:04.924359    4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused" interval="7s"
	Aug 07 18:17:05 functional-100700 kubelet[4998]: E0807 18:17:05.428192    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?resourceVersion=0&timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:17:05 functional-100700 kubelet[4998]: E0807 18:17:05.429028    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:17:05 functional-100700 kubelet[4998]: E0807 18:17:05.429898    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:17:05 functional-100700 kubelet[4998]: E0807 18:17:05.431224    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:17:05 functional-100700 kubelet[4998]: E0807 18:17:05.432211    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:17:05 functional-100700 kubelet[4998]: E0807 18:17:05.432314    4998 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Aug 07 18:17:07 functional-100700 kubelet[4998]: E0807 18:17:07.241531    4998 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 172.28.235.211:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-100700.17e9841fb36ee3f5  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-100700,UID:00b3db9060a30b06edb713820a5caeb5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.28.235.211:8441/readyz\": dial tcp 172.28.235.211:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-100700,},FirstTimestamp:2024-08-07 18:00:04.135166965 +0000 UTC m=+242.626871687,LastTimestamp:2024-08-07 18:00:04.135166965 +0000 UTC m=+242.626871687,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-100700,}"
	Aug 07 18:17:08 functional-100700 kubelet[4998]: E0807 18:17:08.902119    4998 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:17:08 functional-100700 kubelet[4998]: E0807 18:17:08.904535    4998 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:17:08 functional-100700 kubelet[4998]: E0807 18:17:08.904036    4998 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Aug 07 18:17:08 functional-100700 kubelet[4998]: E0807 18:17:08.904603    4998 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:17:08 functional-100700 kubelet[4998]: E0807 18:17:08.904626    4998 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:17:08 functional-100700 kubelet[4998]: E0807 18:17:08.904461    4998 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Aug 07 18:17:08 functional-100700 kubelet[4998]: E0807 18:17:08.904650    4998 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:17:08 functional-100700 kubelet[4998]: I0807 18:17:08.904667    4998 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:17:08 functional-100700 kubelet[4998]: E0807 18:17:08.903654    4998 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 07 18:17:08 functional-100700 kubelet[4998]: E0807 18:17:08.904688    4998 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:17:08 functional-100700 kubelet[4998]: E0807 18:17:08.904417    4998 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 07 18:17:08 functional-100700 kubelet[4998]: E0807 18:17:08.909713    4998 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:17:08 functional-100700 kubelet[4998]: E0807 18:17:08.909275    4998 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 07 18:17:08 functional-100700 kubelet[4998]: E0807 18:17:08.909811    4998 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 07 18:17:08 functional-100700 kubelet[4998]: E0807 18:17:08.910355    4998 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Aug 07 18:17:09 functional-100700 kubelet[4998]: E0807 18:17:09.220477    4998 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 17m15.963335495s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0807 18:12:43.167088    9032 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0807 18:13:08.107048    9032 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:13:08.146225    9032 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:13:08.191046    9032 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:14:08.351614    9032 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:15:08.446854    9032 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:15:08.515505    9032 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:16:08.658835    9032 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:16:08.705312    9032 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-100700 -n functional-100700
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-100700 -n functional-100700: exit status 2 (12.4151417s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0807 18:17:09.812314   13520 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-100700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/MySQL (296.61s)

                                                
                                    
TestFunctional/parallel/NodeLabels (179.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-100700 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-100700 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (2.2147189s)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-100700 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-100700 -n functional-100700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-100700 -n functional-100700: exit status 2 (13.2865944s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0807 18:23:27.051219    8400 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 logs -n 25: (2m31.3119937s)
helpers_test.go:252: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                Args                                                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | functional-100700 ssh sudo cat                                                                      | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:12 UTC |
	|         | /etc/ssl/certs/96602.pem                                                                            |                   |                   |         |                     |                     |
	| cp      | functional-100700 cp functional-100700:/home/docker/cp-test.txt                                     | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:13 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd1063066128\001\cp-test.txt |                   |                   |         |                     |                     |
	| ssh     | functional-100700 ssh sudo cat                                                                      | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:12 UTC | 07 Aug 24 18:13 UTC |
	|         | /usr/share/ca-certificates/96602.pem                                                                |                   |                   |         |                     |                     |
	| ssh     | functional-100700 ssh -n                                                                            | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	|         | functional-100700 sudo cat                                                                          |                   |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| ssh     | functional-100700 ssh sudo cat                                                                      | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	|         | /etc/ssl/certs/3ec20f2e.0                                                                           |                   |                   |         |                     |                     |
	| cp      | functional-100700 cp                                                                                | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	|         | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| ssh     | functional-100700 ssh -n                                                                            | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	|         | functional-100700 sudo cat                                                                          |                   |                   |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| tunnel  | functional-100700 tunnel                                                                            | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| tunnel  | functional-100700 tunnel                                                                            | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| tunnel  | functional-100700 tunnel                                                                            | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| addons  | functional-100700 addons list                                                                       | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	| addons  | functional-100700 addons list                                                                       | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:13 UTC | 07 Aug 24 18:13 UTC |
	|         | -o json                                                                                             |                   |                   |         |                     |                     |
	| service | functional-100700 service list                                                                      | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:17 UTC |                     |
	| service | functional-100700 service list                                                                      | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:17 UTC |                     |
	|         | -o json                                                                                             |                   |                   |         |                     |                     |
	| service | functional-100700 service                                                                           | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:17 UTC |                     |
	|         | --namespace=default --https                                                                         |                   |                   |         |                     |                     |
	|         | --url hello-node                                                                                    |                   |                   |         |                     |                     |
	| service | functional-100700                                                                                   | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:17 UTC |                     |
	|         | service hello-node --url                                                                            |                   |                   |         |                     |                     |
	|         | --format={{.IP}}                                                                                    |                   |                   |         |                     |                     |
	| service | functional-100700 service                                                                           | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:17 UTC |                     |
	|         | hello-node --url                                                                                    |                   |                   |         |                     |                     |
	| image   | functional-100700 image load --daemon                                                               | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:18 UTC | 07 Aug 24 18:19 UTC |
	|         | docker.io/kicbase/echo-server:functional-100700                                                     |                   |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| image   | functional-100700 image ls                                                                          | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:19 UTC | 07 Aug 24 18:20 UTC |
	| image   | functional-100700 image load --daemon                                                               | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:20 UTC | 07 Aug 24 18:21 UTC |
	|         | docker.io/kicbase/echo-server:functional-100700                                                     |                   |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| image   | functional-100700 image ls                                                                          | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:21 UTC | 07 Aug 24 18:22 UTC |
	| image   | functional-100700 image load --daemon                                                               | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:22 UTC | 07 Aug 24 18:23 UTC |
	|         | docker.io/kicbase/echo-server:functional-100700                                                     |                   |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| image   | functional-100700 image ls                                                                          | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:23 UTC |                     |
	| ssh     | functional-100700 ssh sudo                                                                          | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:23 UTC |                     |
	|         | systemctl is-active crio                                                                            |                   |                   |         |                     |                     |
	| start   | -p functional-100700                                                                                | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:23 UTC |                     |
	|         | --dry-run --memory                                                                                  |                   |                   |         |                     |                     |
	|         | 250MB --alsologtostderr                                                                             |                   |                   |         |                     |                     |
	|         | --driver=hyperv                                                                                     |                   |                   |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 18:23:34
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 18:23:34.900170    5948 out.go:291] Setting OutFile to fd 1068 ...
	I0807 18:23:34.900737    5948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:23:34.900737    5948 out.go:304] Setting ErrFile to fd 1164...
	I0807 18:23:34.900737    5948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:23:34.924601    5948 out.go:298] Setting JSON to false
	I0807 18:23:34.928346    5948 start.go:129] hostinfo: {"hostname":"minikube6","uptime":316944,"bootTime":1722738070,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4717 Build 19045.4717","kernelVersion":"10.0.19045.4717 Build 19045.4717","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0807 18:23:34.928346    5948 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 18:23:34.933462    5948 out.go:177] * [functional-100700] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	I0807 18:23:34.936447    5948 notify.go:220] Checking for updates...
	I0807 18:23:34.939390    5948 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 18:23:34.942404    5948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 18:23:34.945106    5948 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0807 18:23:34.947653    5948 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 18:23:34.951081    5948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
	==> Docker <==
	Aug 07 18:25:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:25:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID '60d38309b3f444639ea52ae1b7c219816eaed13bbb7eb6ef22b5ecb5ee8fc804'"
	Aug 07 18:25:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:25:10Z" level=error msg="error getting RW layer size for container ID '8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:25:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:25:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8257548df8d0db4a3326d585a67cf2b31ec7c8b63a2fb41d0009d9c5fa124c3b'"
	Aug 07 18:25:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:25:10Z" level=error msg="error getting RW layer size for container ID '333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:25:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:25:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID '333ea0f6bde6b1ef97dfdfac80a9d448a8898827c7df556176cd4ea6cc523a33'"
	Aug 07 18:25:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:25:10Z" level=error msg="error getting RW layer size for container ID '1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:25:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:25:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1ca5873cb027b5d4205ca4cdff1109b4f47c0b8c5f18ae986a0b932c8d9584f4'"
	Aug 07 18:25:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:25:10Z" level=error msg="error getting RW layer size for container ID '76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:25:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:25:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID '76120dfe1c32eb89c2bb01fc16ebaad087df9e517d03ed5169fb50db579cfe65'"
	Aug 07 18:25:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:25:10Z" level=error msg="error getting RW layer size for container ID '8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:25:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:25:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8e6d65d222ddafeb197a6bdeeb312ce4cb757e9ed4da126acc744cc3b9f1550e'"
	Aug 07 18:25:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:25:10Z" level=error msg="error getting RW layer size for container ID 'd57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/d57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:25:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:25:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd57a72e940a3aff58bc3f9442b486233311d0f1b0b85603bed6541d03b52640b'"
	Aug 07 18:25:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:25:10Z" level=error msg="error getting RW layer size for container ID 'a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:25:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:25:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a781cd4bdb89586cb36144096a96bd794ac5f7417bddb500925aa03f7a83ee76'"
	Aug 07 18:25:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:25:10Z" level=error msg="error getting RW layer size for container ID '3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:25:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:25:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3bc20896cf9b1f1d0380b658e8e185cf6d0bedfce8143c9b50eb86a711c71a45'"
	Aug 07 18:25:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:25:10Z" level=error msg="error getting RW layer size for container ID '623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:25:10 functional-100700 cri-dockerd[4699]: time="2024-08-07T18:25:10Z" level=error msg="Set backoffDuration to : 1m0s for container ID '623399e23aa3ba701f8a621854544b48498da36bba773f8911042b6d2561b57c'"
	Aug 07 18:25:10 functional-100700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 07 18:25:10 functional-100700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 07 18:25:10 functional-100700 systemd[1]: Failed to start Docker Application Container Engine.
	Aug 07 18:25:11 functional-100700 systemd[1]: docker.service: Scheduled restart job, restart counter is at 6.
	Aug 07 18:25:11 functional-100700 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 18:25:11 functional-100700 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-07T18:25:13Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.315437] systemd-fstab-generator[4011]: Ignoring "noauto" option for root device
	[  +5.331377] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.070413] systemd-fstab-generator[4649]: Ignoring "noauto" option for root device
	[  +0.207492] systemd-fstab-generator[4660]: Ignoring "noauto" option for root device
	[  +0.210288] systemd-fstab-generator[4672]: Ignoring "noauto" option for root device
	[  +0.283847] systemd-fstab-generator[4687]: Ignoring "noauto" option for root device
	[  +0.989188] systemd-fstab-generator[4863]: Ignoring "noauto" option for root device
	[Aug 7 17:56] systemd-fstab-generator[4990]: Ignoring "noauto" option for root device
	[  +0.108824] kauditd_printk_skb: 137 callbacks suppressed
	[  +6.503684] kauditd_printk_skb: 52 callbacks suppressed
	[ +11.988991] systemd-fstab-generator[5884]: Ignoring "noauto" option for root device
	[  +0.145733] kauditd_printk_skb: 31 callbacks suppressed
	[Aug 7 17:59] systemd-fstab-generator[7536]: Ignoring "noauto" option for root device
	[  +0.151220] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.513441] systemd-fstab-generator[7585]: Ignoring "noauto" option for root device
	[  +0.282699] systemd-fstab-generator[7598]: Ignoring "noauto" option for root device
	[  +0.340219] systemd-fstab-generator[7612]: Ignoring "noauto" option for root device
	[  +5.344642] kauditd_printk_skb: 89 callbacks suppressed
	[Aug 7 18:12] systemd-fstab-generator[11104]: Ignoring "noauto" option for root device
	[Aug 7 18:13] systemd-fstab-generator[11514]: Ignoring "noauto" option for root device
	[  +0.128359] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 7 18:17] systemd-fstab-generator[12657]: Ignoring "noauto" option for root device
	[  +0.123972] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 7 18:18] systemd-fstab-generator[13086]: Ignoring "noauto" option for root device
	[  +0.146348] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:26:11 up 35 min,  0 users,  load average: 0.02, 0.05, 0.06
	Linux functional-100700 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 07 18:26:06 functional-100700 kubelet[4998]: E0807 18:26:06.096650    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:26:06 functional-100700 kubelet[4998]: E0807 18:26:06.098013    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:26:06 functional-100700 kubelet[4998]: E0807 18:26:06.099480    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:26:06 functional-100700 kubelet[4998]: E0807 18:26:06.100704    4998 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-100700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused"
	Aug 07 18:26:06 functional-100700 kubelet[4998]: E0807 18:26:06.100810    4998 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Aug 07 18:26:09 functional-100700 kubelet[4998]: E0807 18:26:09.324951    4998 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 26m16.067777474s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/run/docker.sock: read: connection reset by peer]"
	Aug 07 18:26:11 functional-100700 kubelet[4998]: E0807 18:26:11.129331    4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-100700?timeout=10s\": dial tcp 172.28.235.211:8441: connect: connection refused" interval="7s"
	Aug 07 18:26:11 functional-100700 kubelet[4998]: E0807 18:26:11.187595    4998 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:26:11 functional-100700 kubelet[4998]: E0807 18:26:11.187690    4998 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:26:11 functional-100700 kubelet[4998]: E0807 18:26:11.188213    4998 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 07 18:26:11 functional-100700 kubelet[4998]: E0807 18:26:11.191456    4998 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:26:11 functional-100700 kubelet[4998]: E0807 18:26:11.194132    4998 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Aug 07 18:26:11 functional-100700 kubelet[4998]: E0807 18:26:11.194167    4998 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:26:11 functional-100700 kubelet[4998]: I0807 18:26:11.194181    4998 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:26:11 functional-100700 kubelet[4998]: E0807 18:26:11.194208    4998 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Aug 07 18:26:11 functional-100700 kubelet[4998]: E0807 18:26:11.194222    4998 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:26:11 functional-100700 kubelet[4998]: I0807 18:26:11.194233    4998 image_gc_manager.go:214] "Failed to monitor images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:26:11 functional-100700 kubelet[4998]: E0807 18:26:11.194262    4998 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Aug 07 18:26:11 functional-100700 kubelet[4998]: E0807 18:26:11.194292    4998 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:26:11 functional-100700 kubelet[4998]: E0807 18:26:11.194308    4998 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:26:11 functional-100700 kubelet[4998]: E0807 18:26:11.194335    4998 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 07 18:26:11 functional-100700 kubelet[4998]: E0807 18:26:11.194385    4998 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 07 18:26:11 functional-100700 kubelet[4998]: E0807 18:26:11.196014    4998 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 07 18:26:11 functional-100700 kubelet[4998]: E0807 18:26:11.196185    4998 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 07 18:26:11 functional-100700 kubelet[4998]: E0807 18:26:11.197196    4998 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

-- /stdout --
** stderr ** 
	W0807 18:23:40.339933    3608 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0807 18:24:10.621776    3608 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:24:10.663992    3608 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:24:10.713306    3608 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:24:10.755937    3608 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:24:10.805587    3608 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:25:10.962693    3608 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:25:11.004099    3608 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0807 18:25:11.052275    3608 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-100700 -n functional-100700
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-100700 -n functional-100700: exit status 2 (12.6611461s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0807 18:26:11.670033    8512 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-100700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (179.50s)
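
Every failure in this run reduces to the same signature: the Docker daemon inside the functional-100700 VM stopped answering on /var/run/docker.sock. A minimal shell sketch of how one might tally that signature in a saved `minikube logs --file=logs.txt` dump; the inlined sample lines are illustrative stand-ins, not part of this report's artifacts:

```shell
# Hedged sketch: count the recurring daemon-connectivity error in a saved log.
# In practice, point `log` at the contents of a real logs.txt dump.
log='Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
read unix @->/var/run/docker.sock: read: connection reset by peer
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?'

# grep -c counts matching lines, giving a quick sense of how pervasive the failure is.
count=$(printf '%s\n' "$log" | grep -c 'Cannot connect to the Docker daemon')
echo "$count"   # prints: 2
```

Only the `Cannot connect to the Docker daemon` literal comes from the output above; the rest is standard POSIX shell.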

TestFunctional/parallel/DockerEnv/powershell (477.48s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-100700 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-100700"
functional_test.go:495: (dbg) Non-zero exit: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-100700 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-100700": exit status 1 (7m57.4736495s)

** stderr ** 
	W0807 18:12:14.583873    7720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_DOCKER_SCRIPT: Error generating set output: write /dev/stdout: The pipe is being closed.
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_docker-env_e7a87817879750ae3d8d73c11fc2625d0ca04f2f_9.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	E0807 18:20:09.877237    7720 out.go:190] Fprintf failed: write /dev/stdout: The pipe is being closed.

** /stderr **
functional_test.go:498: failed to run the command by deadline. exceeded timeout. powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-100700 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-100700"
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/powershell (477.48s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-100700 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-100700 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: W0807 18:13:30.098213    1548 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0807 18:13:30.197999    1548 out.go:291] Setting OutFile to fd 1128 ...
I0807 18:13:30.213277    1548 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:13:30.213320    1548 out.go:304] Setting ErrFile to fd 1236...
I0807 18:13:30.213384    1548 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:13:30.229111    1548 mustload.go:65] Loading cluster: functional-100700
I0807 18:13:30.230251    1548 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0807 18:13:30.231219    1548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
I0807 18:13:32.659375    1548 main.go:141] libmachine: [stdout =====>] : Running

I0807 18:13:32.659460    1548 main.go:141] libmachine: [stderr =====>] : 
I0807 18:13:32.659460    1548 host.go:66] Checking if "functional-100700" exists ...
I0807 18:13:32.660584    1548 api_server.go:166] Checking apiserver status ...
I0807 18:13:32.675390    1548 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0807 18:13:32.675390    1548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
I0807 18:13:35.079974    1548 main.go:141] libmachine: [stdout =====>] : Running

I0807 18:13:35.079974    1548 main.go:141] libmachine: [stderr =====>] : 
I0807 18:13:35.079974    1548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
I0807 18:13:37.889121    1548 main.go:141] libmachine: [stdout =====>] : 172.28.235.211

I0807 18:13:37.889121    1548 main.go:141] libmachine: [stderr =====>] : 
I0807 18:13:37.890062    1548 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
I0807 18:13:38.004035    1548 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.3285767s)
W0807 18:13:38.004035    1548 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0807 18:13:38.008147    1548 out.go:177] * The control-plane node functional-100700 apiserver is not running: (state=Stopped)
I0807 18:13:38.011427    1548 out.go:177]   To start a cluster, run: "minikube start -p functional-100700"

stdout: * The control-plane node functional-100700 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-100700"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-100700 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 13768: Access is denied.
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-100700 tunnel --alsologtostderr] stdout:
* The control-plane node functional-100700 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-100700"
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-100700 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-100700 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-100700 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-100700 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.09s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (4.24s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-100700 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-100700 apply -f testdata\testsvc.yaml: exit status 1 (4.2261843s)

** stderr ** 
	error: error validating "testdata\\testsvc.yaml": error validating data: failed to download openapi: Get "https://172.28.235.211:8441/openapi/v2?timeout=32s": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-100700 apply -f testdata\testsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (4.24s)

TestFunctional/parallel/ServiceCmd/DeployApp (2.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-100700 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1435: (dbg) Non-zero exit: kubectl --context functional-100700 create deployment hello-node --image=registry.k8s.io/echoserver:1.8: exit status 1 (2.1467852s)

** stderr ** 
	error: failed to create deployment: Post "https://172.28.235.211:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 172.28.235.211:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-100700 create deployment hello-node --image=registry.k8s.io/echoserver:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (2.16s)

TestFunctional/parallel/ServiceCmd/List (7.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-100700 service list: exit status 103 (7.411687s)

-- stdout --
	* The control-plane node functional-100700 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-100700"

-- /stdout --
** stderr ** 
	W0807 18:17:24.365022   13016 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1457: failed to do service list. args "out/minikube-windows-amd64.exe -p functional-100700 service list" : exit status 103
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-100700 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-100700\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (7.41s)

TestFunctional/parallel/ServiceCmd/JSONOutput (7.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-100700 service list -o json: exit status 103 (7.4485684s)

-- stdout --
	* The control-plane node functional-100700 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-100700"

-- /stdout --
** stderr ** 
	W0807 18:17:31.775960    1724 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1487: failed to list services with json format. args "out/minikube-windows-amd64.exe -p functional-100700 service list -o json": exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (7.45s)

TestFunctional/parallel/ServiceCmd/HTTPS (7.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-100700 service --namespace=default --https --url hello-node: exit status 103 (7.573904s)

-- stdout --
	* The control-plane node functional-100700 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-100700"

-- /stdout --
** stderr ** 
	W0807 18:17:39.225629   12900 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-100700 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (7.58s)

TestFunctional/parallel/ServiceCmd/Format (7.66s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-100700 service hello-node --url --format={{.IP}}: exit status 103 (7.6611457s)

-- stdout --
	* The control-plane node functional-100700 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-100700"

-- /stdout --
** stderr ** 
	W0807 18:17:46.804154    9396 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-100700 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1544: "* The control-plane node functional-100700 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-100700\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (7.66s)

TestFunctional/parallel/ServiceCmd/URL (7.67s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-100700 service hello-node --url: exit status 103 (7.6686984s)

-- stdout --
	* The control-plane node functional-100700 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-100700"

-- /stdout --
** stderr ** 
	W0807 18:17:54.467048    6232 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-100700 service hello-node --url": exit status 103
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-100700 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-100700"
functional_test.go:1565: failed to parse "* The control-plane node functional-100700 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-100700\"": parse "* The control-plane node functional-100700 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-100700\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (7.67s)

TestFunctional/parallel/ImageCommands/ImageListShort (60.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 image ls --format short --alsologtostderr: (1m0.0316063s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-100700 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-100700 image ls --format short --alsologtostderr:
W0807 18:28:12.031381   10268 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0807 18:28:12.167390   10268 out.go:291] Setting OutFile to fd 1216 ...
I0807 18:28:12.168401   10268 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:28:12.168401   10268 out.go:304] Setting ErrFile to fd 1268...
I0807 18:28:12.168401   10268 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:28:12.188526   10268 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0807 18:28:12.189020   10268 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0807 18:28:12.190024   10268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
I0807 18:28:15.141622   10268 main.go:141] libmachine: [stdout =====>] : Running

I0807 18:28:15.141622   10268 main.go:141] libmachine: [stderr =====>] : 
I0807 18:28:15.161116   10268 ssh_runner.go:195] Run: systemctl --version
I0807 18:28:15.161116   10268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
I0807 18:28:17.697039   10268 main.go:141] libmachine: [stdout =====>] : Running

I0807 18:28:17.697039   10268 main.go:141] libmachine: [stderr =====>] : 
I0807 18:28:17.698087   10268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
I0807 18:28:20.694651   10268 main.go:141] libmachine: [stdout =====>] : 172.28.235.211

I0807 18:28:20.694734   10268 main.go:141] libmachine: [stderr =====>] : 
I0807 18:28:20.695082   10268 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
I0807 18:28:20.800610   10268 ssh_runner.go:235] Completed: systemctl --version: (5.6393194s)
I0807 18:28:20.812023   10268 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0807 18:29:11.868570   10268 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (51.0558353s)
W0807 18:29:11.868637   10268 cache_images.go:721] Failed to list images for profile functional-100700 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (60.03s)

TestFunctional/parallel/ImageCommands/ImageListTable (59.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-100700 image ls --format table --alsologtostderr: exit status 1 (59.4328332s)

** stderr ** 
	W0807 18:29:12.032338    1168 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0807 18:29:12.121347    1168 out.go:291] Setting OutFile to fd 1300 ...
	I0807 18:29:12.122127    1168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:29:12.122127    1168 out.go:304] Setting ErrFile to fd 672...
	I0807 18:29:12.122127    1168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:29:12.141364    1168 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:29:12.142201    1168 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:29:12.142628    1168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 18:29:14.459129    1168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:29:14.459240    1168 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:29:14.474069    1168 ssh_runner.go:195] Run: systemctl --version
	I0807 18:29:14.474069    1168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 18:29:16.732988    1168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:29:16.733051    1168 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:29:16.733114    1168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 18:29:19.392569    1168 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 18:29:19.392569    1168 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:29:19.392651    1168 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 18:29:19.483927    1168 ssh_runner.go:235] Completed: systemctl --version: (5.0097943s)
	I0807 18:29:19.494649    1168 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"

** /stderr **
functional_test.go:262: listing image with minikube: exit status 1

** stderr ** 
	W0807 18:29:12.032338    1168 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0807 18:29:12.121347    1168 out.go:291] Setting OutFile to fd 1300 ...
	I0807 18:29:12.122127    1168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:29:12.122127    1168 out.go:304] Setting ErrFile to fd 672...
	I0807 18:29:12.122127    1168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:29:12.141364    1168 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:29:12.142201    1168 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:29:12.142628    1168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 18:29:14.459129    1168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:29:14.459240    1168 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:29:14.474069    1168 ssh_runner.go:195] Run: systemctl --version
	I0807 18:29:14.474069    1168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
	I0807 18:29:16.732988    1168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:29:16.733051    1168 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:29:16.733114    1168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
	I0807 18:29:19.392569    1168 main.go:141] libmachine: [stdout =====>] : 172.28.235.211
	
	I0807 18:29:19.392569    1168 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:29:19.392651    1168 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
	I0807 18:29:19.483927    1168 ssh_runner.go:235] Completed: systemctl --version: (5.0097943s)
	I0807 18:29:19.494649    1168 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (59.43s)

TestFunctional/parallel/ImageCommands/ImageListJson (45.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 image ls --format json --alsologtostderr: (45.9390384s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-100700 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-100700 image ls --format json --alsologtostderr:
W0807 18:28:26.158890    9804 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0807 18:28:26.245596    9804 out.go:291] Setting OutFile to fd 856 ...
I0807 18:28:26.264035    9804 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:28:26.264035    9804 out.go:304] Setting ErrFile to fd 1056...
I0807 18:28:26.264035    9804 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:28:26.285079    9804 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0807 18:28:26.285986    9804 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0807 18:28:26.287003    9804 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
I0807 18:28:28.558400    9804 main.go:141] libmachine: [stdout =====>] : Running

I0807 18:28:28.558400    9804 main.go:141] libmachine: [stderr =====>] : 
I0807 18:28:28.573352    9804 ssh_runner.go:195] Run: systemctl --version
I0807 18:28:28.573352    9804 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
I0807 18:28:30.827064    9804 main.go:141] libmachine: [stdout =====>] : Running

I0807 18:28:30.827153    9804 main.go:141] libmachine: [stderr =====>] : 
I0807 18:28:30.827203    9804 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
I0807 18:28:33.414073    9804 main.go:141] libmachine: [stdout =====>] : 172.28.235.211

I0807 18:28:33.414073    9804 main.go:141] libmachine: [stderr =====>] : 
I0807 18:28:33.415080    9804 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
I0807 18:28:33.508428    9804 ssh_runner.go:235] Completed: systemctl --version: (4.935014s)
I0807 18:28:33.518720    9804 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0807 18:29:11.889370    9804 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (38.3701627s)
W0807 18:29:11.889574    9804 cache_images.go:721] Failed to list images for profile functional-100700 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (45.94s)

TestFunctional/parallel/ImageCommands/ImageListYaml (60.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 image ls --format yaml --alsologtostderr: (1m0.0666102s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-100700 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-100700 image ls --format yaml --alsologtostderr:
W0807 18:28:12.029371   13388 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0807 18:28:12.167390   13388 out.go:291] Setting OutFile to fd 1360 ...
I0807 18:28:12.168401   13388 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:28:12.168401   13388 out.go:304] Setting ErrFile to fd 1084...
I0807 18:28:12.168401   13388 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:28:12.204724   13388 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0807 18:28:12.204922   13388 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0807 18:28:12.205926   13388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
I0807 18:28:15.184683   13388 main.go:141] libmachine: [stdout =====>] : Running

I0807 18:28:15.184683   13388 main.go:141] libmachine: [stderr =====>] : 
I0807 18:28:15.199753   13388 ssh_runner.go:195] Run: systemctl --version
I0807 18:28:15.199753   13388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
I0807 18:28:17.765349   13388 main.go:141] libmachine: [stdout =====>] : Running

I0807 18:28:17.765349   13388 main.go:141] libmachine: [stderr =====>] : 
I0807 18:28:17.765349   13388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
I0807 18:28:20.780722   13388 main.go:141] libmachine: [stdout =====>] : 172.28.235.211

I0807 18:28:20.780722   13388 main.go:141] libmachine: [stderr =====>] : 
I0807 18:28:20.780722   13388 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
I0807 18:28:20.909467   13388 ssh_runner.go:235] Completed: systemctl --version: (5.7096421s)
I0807 18:28:20.923285   13388 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0807 18:29:11.879036   13388 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (50.955104s)
W0807 18:29:11.879036   13388 cache_images.go:721] Failed to list images for profile functional-100700 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (60.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (119.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-100700 ssh pgrep buildkitd: exit status 1 (11.3278338s)

** stderr ** 
	W0807 18:28:12.030380    9544 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 image build -t localhost/my-image:functional-100700 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 image build -t localhost/my-image:functional-100700 testdata\build --alsologtostderr: (48.7347648s)
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-100700 image build -t localhost/my-image:functional-100700 testdata\build --alsologtostderr:
W0807 18:28:23.318812    7548 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0807 18:28:23.402632    7548 out.go:291] Setting OutFile to fd 716 ...
I0807 18:28:23.425964    7548 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:28:23.425964    7548 out.go:304] Setting ErrFile to fd 1220...
I0807 18:28:23.426119    7548 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:28:23.446061    7548 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0807 18:28:23.465570    7548 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0807 18:28:23.466579    7548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
I0807 18:28:25.815938    7548 main.go:141] libmachine: [stdout =====>] : Running

I0807 18:28:25.816011    7548 main.go:141] libmachine: [stderr =====>] : 
I0807 18:28:25.829509    7548 ssh_runner.go:195] Run: systemctl --version
I0807 18:28:25.829509    7548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-100700 ).state
I0807 18:28:28.109539    7548 main.go:141] libmachine: [stdout =====>] : Running

I0807 18:28:28.109539    7548 main.go:141] libmachine: [stderr =====>] : 
I0807 18:28:28.109539    7548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-100700 ).networkadapters[0]).ipaddresses[0]
I0807 18:28:30.796975    7548 main.go:141] libmachine: [stdout =====>] : 172.28.235.211

I0807 18:28:30.796975    7548 main.go:141] libmachine: [stderr =====>] : 
I0807 18:28:30.798335    7548 sshutil.go:53] new ssh client: &{IP:172.28.235.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-100700\id_rsa Username:docker}
I0807 18:28:30.897687    7548 ssh_runner.go:235] Completed: systemctl --version: (5.0681139s)
I0807 18:28:30.897687    7548 build_images.go:161] Building image from path: C:\Users\jenkins.minikube6\AppData\Local\Temp\build.3940061115.tar
I0807 18:28:30.914836    7548 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0807 18:28:30.950041    7548 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3940061115.tar
I0807 18:28:30.958128    7548 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3940061115.tar: stat -c "%s %y" /var/lib/minikube/build/build.3940061115.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3940061115.tar': No such file or directory
I0807 18:28:30.958330    7548 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\AppData\Local\Temp\build.3940061115.tar --> /var/lib/minikube/build/build.3940061115.tar (3072 bytes)
I0807 18:28:31.031149    7548 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3940061115
I0807 18:28:31.066393    7548 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3940061115 -xf /var/lib/minikube/build/build.3940061115.tar
I0807 18:28:31.084970    7548 docker.go:360] Building image: /var/lib/minikube/build/build.3940061115
I0807 18:28:31.094928    7548 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-100700 /var/lib/minikube/build/build.3940061115
ERROR: error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
I0807 18:29:11.888396    7548 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-100700 /var/lib/minikube/build/build.3940061115: (40.7929494s)
W0807 18:29:11.888396    7548 build_images.go:125] Failed to build image for profile functional-100700. make sure the profile is running. Docker build /var/lib/minikube/build/build.3940061115.tar: buildimage docker: docker build -t localhost/my-image:functional-100700 /var/lib/minikube/build/build.3940061115: Process exited with status 1
stdout:

stderr:
ERROR: error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
I0807 18:29:11.888396    7548 build_images.go:133] succeeded building to: 
I0807 18:29:11.888396    7548 build_images.go:134] failed building to: functional-100700
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 image ls
functional_test.go:447: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-100700 image ls: exit status 1 (59.3987215s)

** stderr ** 
	W0807 18:29:12.074319    4992 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:439: listing images: exit status 1

** stderr ** 
	W0807 18:29:12.074319    4992 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (119.46s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (104.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 image load --daemon docker.io/kicbase/echo-server:functional-100700 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 image load --daemon docker.io/kicbase/echo-server:functional-100700 --alsologtostderr: (44.092042s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 image ls: (1m0.1273752s)
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-100700" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (104.22s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (120.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 image load --daemon docker.io/kicbase/echo-server:functional-100700 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 image load --daemon docker.io/kicbase/echo-server:functional-100700 --alsologtostderr: (1m0.2295188s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 image ls
E0807 18:21:23.647361    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 image ls: (1m0.2408971s)
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-100700" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (120.47s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (120.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:234: (dbg) Done: docker pull docker.io/kicbase/echo-server:latest: (1.0914767s)
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-100700
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 image load --daemon docker.io/kicbase/echo-server:functional-100700 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 image load --daemon docker.io/kicbase/echo-server:functional-100700 --alsologtostderr: (58.9074211s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 image ls: (1m0.2468102s)
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-100700" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (120.45s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (60.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 image save docker.io/kicbase/echo-server:functional-100700 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 image save docker.io/kicbase/echo-server:functional-100700 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (1m0.3428043s)
functional_test.go:385: expected "C:\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (60.34s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:408: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-100700 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: exit status 80 (446.604ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	W0807 18:27:11.564383    1768 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0807 18:27:11.644369    1768 out.go:291] Setting OutFile to fd 1352 ...
	I0807 18:27:11.662671    1768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:27:11.662671    1768 out.go:304] Setting ErrFile to fd 1356...
	I0807 18:27:11.662671    1768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:27:11.680213    1768 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:27:11.681064    1768 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\C_\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	I0807 18:27:11.801728    1768 cache.go:107] acquiring lock: {Name:mkf95425a8915dbfb11d7c7d69d8a47644f0157a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 18:27:11.804639    1768 cache.go:96] cache image "C:\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar" -> "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar" took 123.5363ms
	I0807 18:27:11.809691    1768 out.go:177] 
	W0807 18:27:11.812501    1768 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	W0807 18:27:11.812501    1768 out.go:239] * 
	* 
	W0807 18:27:11.877444    1768 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_image_bf8b5ea9b66d8bcd63802fc9426bafd81ca6940c_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_image_bf8b5ea9b66d8bcd63802fc9426bafd81ca6940c_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 18:27:11.880789    1768 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:410: loading image into minikube from file: exit status 80

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0807 18:27:11.564383    1768 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0807 18:27:11.644369    1768 out.go:291] Setting OutFile to fd 1352 ...
	I0807 18:27:11.662671    1768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:27:11.662671    1768 out.go:304] Setting ErrFile to fd 1356...
	I0807 18:27:11.662671    1768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:27:11.680213    1768 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:27:11.681064    1768 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\C_\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	I0807 18:27:11.801728    1768 cache.go:107] acquiring lock: {Name:mkf95425a8915dbfb11d7c7d69d8a47644f0157a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 18:27:11.804639    1768 cache.go:96] cache image "C:\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar" -> "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar" took 123.5363ms
	I0807 18:27:11.809691    1768 out.go:177] 
	W0807 18:27:11.812501    1768 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	W0807 18:27:11.812501    1768 out.go:239] * 
	* 
	W0807 18:27:11.877444    1768 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_image_bf8b5ea9b66d8bcd63802fc9426bafd81ca6940c_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_image_bf8b5ea9b66d8bcd63802fc9426bafd81ca6940c_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 18:27:11.880789    1768 out.go:177] 

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.45s)
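Editor's note: the exit-80 failure above happens because the Windows path of the saved tarball is handed to the image-reference parser, which rejects it ("could not parse reference"). A minimal sketch of why a drive-letter path can never be a valid repository name — the regex below is a deliberately simplified approximation of the OCI reference grammar, not minikube's actual code:

```python
import re

# Simplified repository-name check: lowercase alphanumeric components joined
# by '.', '_', '--', or '-', with '/' between path components. The real
# grammar lives in the OCI distribution spec / go-containerregistry.
REPO_COMPONENT = r"[a-z0-9]+(?:(?:\.|_|__|-+)[a-z0-9]+)*"
REPO_RE = re.compile(rf"^{REPO_COMPONENT}(?:/{REPO_COMPONENT})*$")

def looks_like_repository(name: str) -> bool:
    return REPO_RE.match(name) is not None

# A Windows path contains an uppercase drive letter, ':' and '\', none of
# which are legal in a repository name -- hence the parse error in the log.
print(looks_like_repository("kicbase/echo-server"))                         # True
print(looks_like_repository(r"C:\jenkins\workspace\echo-server-save.tar"))  # False
```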

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (71.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-766300 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-766300 -- exec busybox-fc5497c4f-bjlr2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-766300 -- exec busybox-fc5497c4f-bjlr2 -- sh -c "ping -c 1 172.28.224.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-766300 -- exec busybox-fc5497c4f-bjlr2 -- sh -c "ping -c 1 172.28.224.1": exit status 1 (10.5444823s)

                                                
                                                
-- stdout --
	PING 172.28.224.1 (172.28.224.1): 56 data bytes
	
	--- 172.28.224.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0807 18:44:17.045597   12744 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.28.224.1) from pod (busybox-fc5497c4f-bjlr2): exit status 1
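Editor's note: the recurring warning about `C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1...\meta.json` is benign and unrelated to the ping failure. The Docker CLI stores context metadata under the SHA-256 digest of the context name, and the directory in the warning is simply the hash of "default":

```python
import hashlib

# Docker CLI context metadata lives under sha256(<context name>); the
# digest of "default" matches the directory in the warning above.
digest = hashlib.sha256(b"default").hexdigest()
print(digest)  # 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
```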
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-766300 -- exec busybox-fc5497c4f-vzv8c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-766300 -- exec busybox-fc5497c4f-vzv8c -- sh -c "ping -c 1 172.28.224.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-766300 -- exec busybox-fc5497c4f-vzv8c -- sh -c "ping -c 1 172.28.224.1": exit status 1 (10.5304651s)

                                                
                                                
-- stdout --
	PING 172.28.224.1 (172.28.224.1): 56 data bytes
	
	--- 172.28.224.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0807 18:44:28.120481    7464 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.28.224.1) from pod (busybox-fc5497c4f-vzv8c): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-766300 -- exec busybox-fc5497c4f-wf2xw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-766300 -- exec busybox-fc5497c4f-wf2xw -- sh -c "ping -c 1 172.28.224.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-766300 -- exec busybox-fc5497c4f-wf2xw -- sh -c "ping -c 1 172.28.224.1": exit status 1 (10.4969685s)

                                                
                                                
-- stdout --
	PING 172.28.224.1 (172.28.224.1): 56 data bytes
	
	--- 172.28.224.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0807 18:44:39.155288   10380 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.28.224.1) from pod (busybox-fc5497c4f-wf2xw): exit status 1
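Editor's note: in each attempt above, the `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` step succeeds (it resolves the host IP 172.28.224.1); only the subsequent ICMP ping fails with 100% packet loss. A sketch of what that pipeline extracts, against sample text shaped like busybox's nslookup output (the exact resolver wording is an assumption):

```python
# Sample output in the shape busybox's nslookup produces inside the pod.
sample = """\
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 172.28.224.1
"""

line5 = sample.splitlines()[4]   # awk 'NR==5': the fifth line
host_ip = line5.split(" ")[2]    # cut -d' ' -f3: third space-separated field
print(host_ip)  # 172.28.224.1
```

The extracted IP is the Hyper-V host-side gateway address that the test then pings, so the failure points at host-to-pod ICMP being blocked rather than at DNS.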
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-766300 -n ha-766300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-766300 -n ha-766300: (12.9902009s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 logs -n 25: (9.5259356s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-100700                    | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:28 UTC | 07 Aug 24 18:29 UTC |
	|         | image ls --format json               |                   |                   |         |                     |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| image   | functional-100700                    | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:29 UTC |                     |
	|         | image ls --format table              |                   |                   |         |                     |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| image   | functional-100700 image ls           | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:29 UTC |                     |
	| delete  | -p functional-100700                 | functional-100700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:30 UTC | 07 Aug 24 18:31 UTC |
	| start   | -p ha-766300 --wait=true             | ha-766300         | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:31 UTC | 07 Aug 24 18:43 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-766300 -- apply -f             | ha-766300         | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:44 UTC | 07 Aug 24 18:44 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-766300 -- rollout status       | ha-766300         | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:44 UTC | 07 Aug 24 18:44 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-766300 -- get pods -o          | ha-766300         | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:44 UTC | 07 Aug 24 18:44 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-766300 -- get pods -o          | ha-766300         | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:44 UTC | 07 Aug 24 18:44 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-766300 -- exec                 | ha-766300         | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:44 UTC | 07 Aug 24 18:44 UTC |
	|         | busybox-fc5497c4f-bjlr2 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-766300 -- exec                 | ha-766300         | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:44 UTC | 07 Aug 24 18:44 UTC |
	|         | busybox-fc5497c4f-vzv8c --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-766300 -- exec                 | ha-766300         | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:44 UTC | 07 Aug 24 18:44 UTC |
	|         | busybox-fc5497c4f-wf2xw --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-766300 -- exec                 | ha-766300         | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:44 UTC | 07 Aug 24 18:44 UTC |
	|         | busybox-fc5497c4f-bjlr2 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-766300 -- exec                 | ha-766300         | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:44 UTC | 07 Aug 24 18:44 UTC |
	|         | busybox-fc5497c4f-vzv8c --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-766300 -- exec                 | ha-766300         | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:44 UTC | 07 Aug 24 18:44 UTC |
	|         | busybox-fc5497c4f-wf2xw --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-766300 -- exec                 | ha-766300         | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:44 UTC | 07 Aug 24 18:44 UTC |
	|         | busybox-fc5497c4f-bjlr2 -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-766300 -- exec                 | ha-766300         | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:44 UTC | 07 Aug 24 18:44 UTC |
	|         | busybox-fc5497c4f-vzv8c -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-766300 -- exec                 | ha-766300         | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:44 UTC | 07 Aug 24 18:44 UTC |
	|         | busybox-fc5497c4f-wf2xw -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-766300 -- get pods -o          | ha-766300         | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:44 UTC | 07 Aug 24 18:44 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-766300 -- exec                 | ha-766300         | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:44 UTC | 07 Aug 24 18:44 UTC |
	|         | busybox-fc5497c4f-bjlr2              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-766300 -- exec                 | ha-766300         | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:44 UTC |                     |
	|         | busybox-fc5497c4f-bjlr2 -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.224.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-766300 -- exec                 | ha-766300         | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:44 UTC | 07 Aug 24 18:44 UTC |
	|         | busybox-fc5497c4f-vzv8c              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-766300 -- exec                 | ha-766300         | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:44 UTC |                     |
	|         | busybox-fc5497c4f-vzv8c -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.224.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-766300 -- exec                 | ha-766300         | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:44 UTC | 07 Aug 24 18:44 UTC |
	|         | busybox-fc5497c4f-wf2xw              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-766300 -- exec                 | ha-766300         | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:44 UTC |                     |
	|         | busybox-fc5497c4f-wf2xw -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.224.1            |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 18:31:31
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 18:31:31.156543   12940 out.go:291] Setting OutFile to fd 540 ...
	I0807 18:31:31.157550   12940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:31:31.157550   12940 out.go:304] Setting ErrFile to fd 1388...
	I0807 18:31:31.157550   12940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:31:31.182223   12940 out.go:298] Setting JSON to false
	I0807 18:31:31.184906   12940 start.go:129] hostinfo: {"hostname":"minikube6","uptime":317420,"bootTime":1722738070,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4717 Build 19045.4717","kernelVersion":"10.0.19045.4717 Build 19045.4717","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0807 18:31:31.184906   12940 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 18:31:31.191000   12940 out.go:177] * [ha-766300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	I0807 18:31:31.198231   12940 notify.go:220] Checking for updates...
	I0807 18:31:31.198784   12940 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 18:31:31.202150   12940 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 18:31:31.205041   12940 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0807 18:31:31.208112   12940 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 18:31:31.210905   12940 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 18:31:31.214011   12940 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 18:31:36.661002   12940 out.go:177] * Using the hyperv driver based on user configuration
	I0807 18:31:36.665072   12940 start.go:297] selected driver: hyperv
	I0807 18:31:36.665072   12940 start.go:901] validating driver "hyperv" against <nil>
	I0807 18:31:36.665072   12940 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 18:31:36.710427   12940 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 18:31:36.710820   12940 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 18:31:36.710820   12940 cni.go:84] Creating CNI manager for ""
	I0807 18:31:36.710820   12940 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0807 18:31:36.710820   12940 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0807 18:31:36.710820   12940 start.go:340] cluster config:
	{Name:ha-766300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-766300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:31:36.711972   12940 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 18:31:36.716381   12940 out.go:177] * Starting "ha-766300" primary control-plane node in "ha-766300" cluster
	I0807 18:31:36.720895   12940 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 18:31:36.721112   12940 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0807 18:31:36.721112   12940 cache.go:56] Caching tarball of preloaded images
	I0807 18:31:36.721605   12940 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0807 18:31:36.722009   12940 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 18:31:36.722701   12940 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\config.json ...
	I0807 18:31:36.722701   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\config.json: {Name:mkd1789158757b6c59e145754941402c1d283541 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:31:36.723984   12940 start.go:360] acquireMachinesLock for ha-766300: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 18:31:36.723984   12940 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-766300"
	I0807 18:31:36.723984   12940 start.go:93] Provisioning new machine with config: &{Name:ha-766300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-766300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 18:31:36.724594   12940 start.go:125] createHost starting for "" (driver="hyperv")
	I0807 18:31:36.728198   12940 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 18:31:36.729184   12940 start.go:159] libmachine.API.Create for "ha-766300" (driver="hyperv")
	I0807 18:31:36.729184   12940 client.go:168] LocalClient.Create starting
	I0807 18:31:36.729184   12940 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0807 18:31:36.729184   12940 main.go:141] libmachine: Decoding PEM data...
	I0807 18:31:36.729184   12940 main.go:141] libmachine: Parsing certificate...
	I0807 18:31:36.730520   12940 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0807 18:31:36.731040   12940 main.go:141] libmachine: Decoding PEM data...
	I0807 18:31:36.731108   12940 main.go:141] libmachine: Parsing certificate...
	I0807 18:31:36.731317   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0807 18:31:38.858067   12940 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0807 18:31:38.858067   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:31:38.858067   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0807 18:31:40.572678   12940 main.go:141] libmachine: [stdout =====>] : False
	
	I0807 18:31:40.572678   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:31:40.572918   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0807 18:31:42.099167   12940 main.go:141] libmachine: [stdout =====>] : True
	
	I0807 18:31:42.099427   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:31:42.099662   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0807 18:31:45.799162   12940 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0807 18:31:45.799162   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:31:45.802693   12940 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0807 18:31:46.294259   12940 main.go:141] libmachine: Creating SSH key...
	I0807 18:31:46.481137   12940 main.go:141] libmachine: Creating VM...
	I0807 18:31:46.482146   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0807 18:31:49.395155   12940 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0807 18:31:49.395663   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:31:49.395663   12940 main.go:141] libmachine: Using switch "Default Switch"
	I0807 18:31:49.395663   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0807 18:31:51.164524   12940 main.go:141] libmachine: [stdout =====>] : True
	
	I0807 18:31:51.164524   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:31:51.164524   12940 main.go:141] libmachine: Creating VHD
	I0807 18:31:51.164524   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0807 18:31:54.998556   12940 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5FD98D4C-71F4-4FD4-915C-399CE8F6DEBE
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0807 18:31:54.998556   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:31:54.998556   12940 main.go:141] libmachine: Writing magic tar header
	I0807 18:31:54.998556   12940 main.go:141] libmachine: Writing SSH key tar header
	I0807 18:31:55.008940   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0807 18:31:58.352710   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:31:58.353419   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:31:58.353419   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\disk.vhd' -SizeBytes 20000MB
	I0807 18:32:00.960255   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:32:00.960472   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:00.960582   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-766300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0807 18:32:04.721247   12940 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-766300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0807 18:32:04.721247   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:04.721715   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-766300 -DynamicMemoryEnabled $false
	I0807 18:32:07.012924   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:32:07.013670   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:07.013767   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-766300 -Count 2
	I0807 18:32:09.261295   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:32:09.261814   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:09.261927   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-766300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\boot2docker.iso'
	I0807 18:32:11.944960   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:32:11.945505   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:11.945505   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-766300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\disk.vhd'
	I0807 18:32:14.663818   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:32:14.663818   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:14.663818   12940 main.go:141] libmachine: Starting VM...
	I0807 18:32:14.664422   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-766300
	I0807 18:32:17.860244   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:32:17.860489   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:17.860489   12940 main.go:141] libmachine: Waiting for host to start...
	I0807 18:32:17.860777   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:32:20.179792   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:32:20.179792   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:20.180603   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:32:22.752388   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:32:22.752388   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:23.763322   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:32:26.093304   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:32:26.093304   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:26.094210   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:32:28.787475   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:32:28.787475   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:29.802212   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:32:32.121773   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:32:32.121773   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:32.122184   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:32:34.721267   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:32:34.721628   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:35.731816   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:32:38.005919   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:32:38.005919   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:38.005919   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:32:40.621098   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:32:40.621098   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:41.624640   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:32:44.047429   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:32:44.047429   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:44.047429   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:32:46.804544   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:32:46.804652   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:46.804732   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:32:49.077933   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:32:49.078179   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:49.078179   12940 machine.go:94] provisionDockerMachine start ...
	I0807 18:32:49.078443   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:32:51.355441   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:32:51.355441   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:51.356261   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:32:54.084730   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:32:54.084730   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:54.094124   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:32:54.105756   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.88 22 <nil> <nil>}
	I0807 18:32:54.106719   12940 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 18:32:54.239935   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0807 18:32:54.239935   12940 buildroot.go:166] provisioning hostname "ha-766300"
	I0807 18:32:54.239935   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:32:56.489881   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:32:56.489881   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:56.489999   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:32:59.201256   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:32:59.201754   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:59.207968   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:32:59.208679   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.88 22 <nil> <nil>}
	I0807 18:32:59.208679   12940 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-766300 && echo "ha-766300" | sudo tee /etc/hostname
	I0807 18:32:59.379815   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-766300
	
	I0807 18:32:59.379923   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:01.638892   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:01.638892   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:01.638892   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:33:04.381705   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:33:04.381705   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:04.388668   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:33:04.389450   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.88 22 <nil> <nil>}
	I0807 18:33:04.389450   12940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-766300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-766300/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-766300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 18:33:04.545875   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 18:33:04.545875   12940 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0807 18:33:04.545875   12940 buildroot.go:174] setting up certificates
	I0807 18:33:04.545875   12940 provision.go:84] configureAuth start
	I0807 18:33:04.545875   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:06.733541   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:06.733541   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:06.733740   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:33:09.340737   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:33:09.340737   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:09.341206   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:11.545499   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:11.545499   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:11.545499   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:33:14.246402   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:33:14.246402   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:14.247421   12940 provision.go:143] copyHostCerts
	I0807 18:33:14.247536   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0807 18:33:14.247536   12940 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0807 18:33:14.247536   12940 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0807 18:33:14.248457   12940 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0807 18:33:14.251015   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0807 18:33:14.251472   12940 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0807 18:33:14.251595   12940 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0807 18:33:14.252176   12940 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0807 18:33:14.253666   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0807 18:33:14.254097   12940 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0807 18:33:14.254213   12940 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0807 18:33:14.254564   12940 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0807 18:33:14.256334   12940 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-766300 san=[127.0.0.1 172.28.224.88 ha-766300 localhost minikube]
	I0807 18:33:14.405536   12940 provision.go:177] copyRemoteCerts
	I0807 18:33:14.417023   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 18:33:14.417023   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:16.773166   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:16.773421   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:16.773504   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:33:19.553634   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:33:19.553634   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:19.555300   12940 sshutil.go:53] new ssh client: &{IP:172.28.224.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\id_rsa Username:docker}
	I0807 18:33:19.661847   12940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2447573s)
	I0807 18:33:19.661847   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0807 18:33:19.661847   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 18:33:19.721678   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0807 18:33:19.722468   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0807 18:33:19.772052   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0807 18:33:19.772850   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0807 18:33:19.826308   12940 provision.go:87] duration metric: took 15.2802376s to configureAuth
	I0807 18:33:19.826371   12940 buildroot.go:189] setting minikube options for container-runtime
	I0807 18:33:19.826590   12940 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:33:19.826590   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:22.069848   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:22.069848   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:22.069848   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:33:24.688397   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:33:24.689022   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:24.694501   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:33:24.695222   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.88 22 <nil> <nil>}
	I0807 18:33:24.695222   12940 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0807 18:33:24.821880   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0807 18:33:24.821984   12940 buildroot.go:70] root file system type: tmpfs
	I0807 18:33:24.822078   12940 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0807 18:33:24.822266   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:27.051363   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:27.051632   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:27.051730   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:33:29.675102   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:33:29.675102   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:29.681273   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:33:29.681998   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.88 22 <nil> <nil>}
	I0807 18:33:29.681998   12940 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0807 18:33:29.864108   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0807 18:33:29.864108   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:32.038619   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:32.038619   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:32.038619   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:33:34.658790   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:33:34.658790   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:34.664637   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:33:34.665390   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.88 22 <nil> <nil>}
	I0807 18:33:34.665390   12940 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0807 18:33:36.914340   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0807 18:33:36.914531   12940 machine.go:97] duration metric: took 47.8357405s to provisionDockerMachine
	I0807 18:33:36.914588   12940 client.go:171] duration metric: took 2m0.1838657s to LocalClient.Create
	I0807 18:33:36.914588   12940 start.go:167] duration metric: took 2m0.1838657s to libmachine.API.Create "ha-766300"
	I0807 18:33:36.914661   12940 start.go:293] postStartSetup for "ha-766300" (driver="hyperv")
	I0807 18:33:36.914661   12940 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 18:33:36.929053   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 18:33:36.929053   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:39.161162   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:39.161162   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:39.161162   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:33:41.781457   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:33:41.782249   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:41.782767   12940 sshutil.go:53] new ssh client: &{IP:172.28.224.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\id_rsa Username:docker}
	I0807 18:33:41.888710   12940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9595927s)
	I0807 18:33:41.899659   12940 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 18:33:41.906576   12940 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 18:33:41.906674   12940 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0807 18:33:41.906748   12940 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0807 18:33:41.908183   12940 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> 96602.pem in /etc/ssl/certs
	I0807 18:33:41.908250   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> /etc/ssl/certs/96602.pem
	I0807 18:33:41.920518   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 18:33:41.941322   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /etc/ssl/certs/96602.pem (1708 bytes)
	I0807 18:33:42.001613   12940 start.go:296] duration metric: took 5.0868869s for postStartSetup
	I0807 18:33:42.005172   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:44.208431   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:44.208843   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:44.208931   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:33:46.810933   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:33:46.811849   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:46.812072   12940 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\config.json ...
	I0807 18:33:46.815213   12940 start.go:128] duration metric: took 2m10.0889536s to createHost
	I0807 18:33:46.815213   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:49.003942   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:49.004858   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:49.004955   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:33:51.585001   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:33:51.585281   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:51.590192   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:33:51.591350   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.88 22 <nil> <nil>}
	I0807 18:33:51.591350   12940 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0807 18:33:51.716241   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723055631.720639473
	
	I0807 18:33:51.716241   12940 fix.go:216] guest clock: 1723055631.720639473
	I0807 18:33:51.716323   12940 fix.go:229] Guest: 2024-08-07 18:33:51.720639473 +0000 UTC Remote: 2024-08-07 18:33:46.8152135 +0000 UTC m=+135.816939601 (delta=4.905425973s)
	I0807 18:33:51.716323   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:53.900081   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:53.900081   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:53.901028   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:33:56.494290   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:33:56.495304   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:56.500826   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:33:56.501571   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.88 22 <nil> <nil>}
	I0807 18:33:56.501571   12940 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1723055631
	I0807 18:33:56.635040   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Aug  7 18:33:51 UTC 2024
	
	I0807 18:33:56.635040   12940 fix.go:236] clock set: Wed Aug  7 18:33:51 UTC 2024
	 (err=<nil>)
	I0807 18:33:56.635040   12940 start.go:83] releasing machines lock for "ha-766300", held for 2m19.9092657s
	I0807 18:33:56.635825   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:58.832313   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:58.832313   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:58.832579   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:34:01.475567   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:34:01.475567   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:34:01.479254   12940 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0807 18:34:01.479254   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:34:01.490388   12940 ssh_runner.go:195] Run: cat /version.json
	I0807 18:34:01.490388   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:34:03.793685   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:34:03.793685   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:34:03.793685   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:34:03.807724   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:34:03.807724   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:34:03.807724   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:34:06.538310   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:34:06.538310   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:34:06.538788   12940 sshutil.go:53] new ssh client: &{IP:172.28.224.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\id_rsa Username:docker}
	I0807 18:34:06.560228   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:34:06.560228   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:34:06.561073   12940 sshutil.go:53] new ssh client: &{IP:172.28.224.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\id_rsa Username:docker}
	I0807 18:34:06.628873   12940 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1495531s)
	W0807 18:34:06.628873   12940 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0807 18:34:06.661675   12940 ssh_runner.go:235] Completed: cat /version.json: (5.1709464s)
	I0807 18:34:06.673034   12940 ssh_runner.go:195] Run: systemctl --version
	I0807 18:34:06.697358   12940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 18:34:06.707310   12940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 18:34:06.720291   12940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0807 18:34:06.753257   12940 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0807 18:34:06.753257   12940 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0807 18:34:06.753657   12940 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0807 18:34:06.753775   12940 start.go:495] detecting cgroup driver to use...
	I0807 18:34:06.754076   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 18:34:06.804595   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0807 18:34:06.838249   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0807 18:34:06.856752   12940 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0807 18:34:06.869936   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0807 18:34:06.902253   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 18:34:06.933553   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0807 18:34:06.964327   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 18:34:06.995523   12940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 18:34:07.027350   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0807 18:34:07.058217   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0807 18:34:07.091529   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0807 18:34:07.120536   12940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 18:34:07.151359   12940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 18:34:07.184582   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:34:07.400558   12940 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0807 18:34:07.435779   12940 start.go:495] detecting cgroup driver to use...
	I0807 18:34:07.447958   12940 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0807 18:34:07.487663   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 18:34:07.520376   12940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 18:34:07.575738   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 18:34:07.607297   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 18:34:07.646244   12940 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0807 18:34:07.707574   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 18:34:07.732666   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 18:34:07.778281   12940 ssh_runner.go:195] Run: which cri-dockerd
	I0807 18:34:07.796409   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0807 18:34:07.812506   12940 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0807 18:34:07.854201   12940 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0807 18:34:08.068485   12940 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0807 18:34:08.259510   12940 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0807 18:34:08.259510   12940 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0807 18:34:08.307269   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:34:08.506023   12940 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 18:34:11.118054   12940 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6118509s)
	I0807 18:34:11.130256   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0807 18:34:11.169883   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 18:34:11.204088   12940 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0807 18:34:11.412805   12940 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0807 18:34:11.606832   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:34:11.810522   12940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0807 18:34:11.852807   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 18:34:11.891013   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:34:12.106919   12940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0807 18:34:12.219487   12940 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0807 18:34:12.231869   12940 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0807 18:34:12.242462   12940 start.go:563] Will wait 60s for crictl version
	I0807 18:34:12.254687   12940 ssh_runner.go:195] Run: which crictl
	I0807 18:34:12.271674   12940 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 18:34:12.330287   12940 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0807 18:34:12.341736   12940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 18:34:12.386561   12940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 18:34:12.425940   12940 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0807 18:34:12.426251   12940 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0807 18:34:12.430247   12940 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0807 18:34:12.430247   12940 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0807 18:34:12.430247   12940 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0807 18:34:12.430247   12940 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:f6:3a:6a Flags:up|broadcast|multicast|running}
	I0807 18:34:12.432961   12940 ip.go:210] interface addr: fe80::e7eb:b592:d388:ff99/64
	I0807 18:34:12.432961   12940 ip.go:210] interface addr: 172.28.224.1/20
	I0807 18:34:12.447303   12940 ssh_runner.go:195] Run: grep 172.28.224.1	host.minikube.internal$ /etc/hosts
	I0807 18:34:12.453964   12940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 18:34:12.488845   12940 kubeadm.go:883] updating cluster {Name:ha-766300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:ha-766300 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.224.88 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0807 18:34:12.488845   12940 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 18:34:12.499657   12940 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0807 18:34:12.527730   12940 docker.go:685] Got preloaded images: 
	I0807 18:34:12.527730   12940 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0807 18:34:12.544382   12940 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0807 18:34:12.577901   12940 ssh_runner.go:195] Run: which lz4
	I0807 18:34:12.585033   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0807 18:34:12.596712   12940 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0807 18:34:12.603414   12940 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0807 18:34:12.603551   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0807 18:34:14.980666   12940 docker.go:649] duration metric: took 2.3953474s to copy over tarball
	I0807 18:34:14.993942   12940 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0807 18:34:23.805676   12940 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.8116212s)
	I0807 18:34:23.805676   12940 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0807 18:34:23.868693   12940 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0807 18:34:23.886885   12940 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0807 18:34:23.930974   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:34:24.144244   12940 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 18:34:27.490789   12940 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3465018s)
	I0807 18:34:27.500614   12940 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0807 18:34:27.530138   12940 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0807 18:34:27.530204   12940 cache_images.go:84] Images are preloaded, skipping loading
	I0807 18:34:27.530265   12940 kubeadm.go:934] updating node { 172.28.224.88 8443 v1.30.3 docker true true} ...
	I0807 18:34:27.530528   12940 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-766300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.224.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-766300 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 18:34:27.539667   12940 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0807 18:34:27.608811   12940 cni.go:84] Creating CNI manager for ""
	I0807 18:34:27.608811   12940 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0807 18:34:27.608811   12940 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 18:34:27.608811   12940 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.224.88 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-766300 NodeName:ha-766300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.224.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.224.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 18:34:27.609831   12940 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.224.88
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-766300"
	  kubeletExtraArgs:
	    node-ip: 172.28.224.88
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.224.88"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0807 18:34:27.609831   12940 kube-vip.go:115] generating kube-vip config ...
	I0807 18:34:27.621800   12940 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0807 18:34:27.647675   12940 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0807 18:34:27.647929   12940 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.239.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0807 18:34:27.659477   12940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 18:34:27.675601   12940 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 18:34:27.686462   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0807 18:34:27.704463   12940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0807 18:34:27.736773   12940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 18:34:27.768830   12940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0807 18:34:27.800636   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0807 18:34:27.847978   12940 ssh_runner.go:195] Run: grep 172.28.239.254	control-plane.minikube.internal$ /etc/hosts
	I0807 18:34:27.854096   12940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.239.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
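The one-liner above updates /etc/hosts idempotently: filter out any stale line for `control-plane.minikube.internal`, append the current mapping, then copy the result back. A minimal standalone sketch of the same pattern (the temp file, seed entries, and IPs here are illustrative, not taken from this log):

```shell
# Idempotent hosts-file update, mirroring the grep-and-append pattern above.
# Assumption: POSIX sh with mktemp, grep, printf; paths/IPs are illustrative.
TAB=$(printf '\t')
HOSTS=$(mktemp)
# Seed the file with a stale entry for the control-plane name.
printf '127.0.0.1\tlocalhost\n10.0.0.5\tcontrol-plane.minikube.internal\n' > "$HOSTS"
# Drop any existing line for the name, then append the fresh mapping.
{ grep -v "${TAB}control-plane.minikube.internal\$" "$HOSTS"
  printf '172.28.239.254\tcontrol-plane.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
```

Running it repeatedly leaves exactly one entry for the name, which is why minikube can apply it unconditionally on every start.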
	I0807 18:34:27.888705   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:34:28.099801   12940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 18:34:28.134589   12940 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300 for IP: 172.28.224.88
	I0807 18:34:28.134589   12940 certs.go:194] generating shared ca certs ...
	I0807 18:34:28.134656   12940 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:34:28.135397   12940 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0807 18:34:28.135844   12940 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0807 18:34:28.136036   12940 certs.go:256] generating profile certs ...
	I0807 18:34:28.136622   12940 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\client.key
	I0807 18:34:28.136622   12940 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\client.crt with IP's: []
	I0807 18:34:28.349075   12940 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\client.crt ...
	I0807 18:34:28.349075   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\client.crt: {Name:mk8e2227ff939c73df9ce8c26a17f9ee0bfeb14e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:34:28.351039   12940 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\client.key ...
	I0807 18:34:28.351039   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\client.key: {Name:mk9d63ee8d9eb9ecb007518cfee4f98e367f66bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:34:28.351366   12940 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.3bcbab52
	I0807 18:34:28.352407   12940 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.3bcbab52 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.224.88 172.28.239.254]
	I0807 18:34:28.630425   12940 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.3bcbab52 ...
	I0807 18:34:28.630425   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.3bcbab52: {Name:mke152e16ed39bf569fcdb17970a67302a92a3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:34:28.632090   12940 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.3bcbab52 ...
	I0807 18:34:28.632090   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.3bcbab52: {Name:mk7a3da08b84fc181e61ee4963380280cd45725a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:34:28.632090   12940 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.3bcbab52 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt
	I0807 18:34:28.649153   12940 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.3bcbab52 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key
	I0807 18:34:28.650590   12940 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.key
	I0807 18:34:28.650706   12940 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.crt with IP's: []
	I0807 18:34:29.173255   12940 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.crt ...
	I0807 18:34:29.173255   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.crt: {Name:mke4c5cb8c20ed69c24c3bf8303d9fc9b1d9851c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:34:29.174265   12940 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.key ...
	I0807 18:34:29.174265   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.key: {Name:mkd05b87576d91ba8935f3f6110ddcf438efe15e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:34:29.175838   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0807 18:34:29.176393   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0807 18:34:29.176610   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0807 18:34:29.176771   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0807 18:34:29.176771   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0807 18:34:29.176771   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0807 18:34:29.176771   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0807 18:34:29.186991   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0807 18:34:29.188068   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem (1338 bytes)
	W0807 18:34:29.188667   12940 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660_empty.pem, impossibly tiny 0 bytes
	I0807 18:34:29.188667   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0807 18:34:29.189094   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0807 18:34:29.189548   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0807 18:34:29.189732   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0807 18:34:29.190200   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem (1708 bytes)
	I0807 18:34:29.190200   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:34:29.190767   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem -> /usr/share/ca-certificates/9660.pem
	I0807 18:34:29.191110   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> /usr/share/ca-certificates/96602.pem
	I0807 18:34:29.192106   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 18:34:29.246778   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 18:34:29.284463   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 18:34:29.322154   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0807 18:34:29.375620   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0807 18:34:29.421227   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0807 18:34:29.465800   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 18:34:29.515067   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0807 18:34:29.566214   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 18:34:29.611887   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem --> /usr/share/ca-certificates/9660.pem (1338 bytes)
	I0807 18:34:29.657571   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /usr/share/ca-certificates/96602.pem (1708 bytes)
	I0807 18:34:29.703259   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 18:34:29.749997   12940 ssh_runner.go:195] Run: openssl version
	I0807 18:34:29.774335   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9660.pem && ln -fs /usr/share/ca-certificates/9660.pem /etc/ssl/certs/9660.pem"
	I0807 18:34:29.804716   12940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9660.pem
	I0807 18:34:29.811798   12940 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 17:50 /usr/share/ca-certificates/9660.pem
	I0807 18:34:29.823751   12940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9660.pem
	I0807 18:34:29.843063   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9660.pem /etc/ssl/certs/51391683.0"
	I0807 18:34:29.877461   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96602.pem && ln -fs /usr/share/ca-certificates/96602.pem /etc/ssl/certs/96602.pem"
	I0807 18:34:29.907546   12940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96602.pem
	I0807 18:34:29.915285   12940 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 17:50 /usr/share/ca-certificates/96602.pem
	I0807 18:34:29.927579   12940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96602.pem
	I0807 18:34:29.949144   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96602.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 18:34:29.980308   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 18:34:30.010191   12940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:34:30.016644   12940 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:33 /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:34:30.029944   12940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:34:30.051012   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
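Each `test -L … || ln -fs …` command above creates the hash-named CA symlink (e.g. `b5213941.0`, the `openssl x509 -hash` value computed just before it) only when it is missing, so repeated starts are no-ops. A minimal sketch of that guard using throwaway paths (the hash name is copied from the log, not computed here):

```shell
# Idempotent symlink guard, as used for /etc/ssl/certs above.
# Assumption: POSIX sh; directory and file names are illustrative.
DIR=$(mktemp -d)
touch "$DIR/minikubeCA.pem"
# First run creates the link; the second sees it exists and does nothing.
test -L "$DIR/b5213941.0" || ln -fs "$DIR/minikubeCA.pem" "$DIR/b5213941.0"
test -L "$DIR/b5213941.0" || ln -fs "$DIR/minikubeCA.pem" "$DIR/b5213941.0"
```

The hash-named link is what lets OpenSSL's default verify path look up the CA by subject hash rather than by filename.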
	I0807 18:34:30.083524   12940 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 18:34:30.090620   12940 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0807 18:34:30.090998   12940 kubeadm.go:392] StartCluster: {Name:ha-766300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-766300 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.224.88 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:34:30.100012   12940 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0807 18:34:30.138140   12940 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0807 18:34:30.168933   12940 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0807 18:34:30.196415   12940 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0807 18:34:30.213493   12940 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0807 18:34:30.213672   12940 kubeadm.go:157] found existing configuration files:
	
	I0807 18:34:30.225925   12940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0807 18:34:30.243980   12940 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0807 18:34:30.255593   12940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0807 18:34:30.283231   12940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0807 18:34:30.299827   12940 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0807 18:34:30.311414   12940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0807 18:34:30.340369   12940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0807 18:34:30.356612   12940 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0807 18:34:30.368550   12940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0807 18:34:30.397906   12940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0807 18:34:30.415774   12940 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0807 18:34:30.428365   12940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0807 18:34:30.445176   12940 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0807 18:34:30.921603   12940 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0807 18:34:44.308650   12940 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0807 18:34:44.308650   12940 kubeadm.go:310] [preflight] Running pre-flight checks
	I0807 18:34:44.308650   12940 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0807 18:34:44.308650   12940 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0807 18:34:44.309932   12940 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0807 18:34:44.310083   12940 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0807 18:34:44.314465   12940 out.go:204]   - Generating certificates and keys ...
	I0807 18:34:44.314991   12940 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0807 18:34:44.315217   12940 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0807 18:34:44.315405   12940 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0807 18:34:44.315540   12940 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0807 18:34:44.315540   12940 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0807 18:34:44.315540   12940 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0807 18:34:44.315540   12940 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0807 18:34:44.315540   12940 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-766300 localhost] and IPs [172.28.224.88 127.0.0.1 ::1]
	I0807 18:34:44.316083   12940 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0807 18:34:44.316491   12940 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-766300 localhost] and IPs [172.28.224.88 127.0.0.1 ::1]
	I0807 18:34:44.316583   12940 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0807 18:34:44.316720   12940 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0807 18:34:44.316720   12940 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0807 18:34:44.317283   12940 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0807 18:34:44.317283   12940 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0807 18:34:44.317615   12940 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0807 18:34:44.317814   12940 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0807 18:34:44.317814   12940 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0807 18:34:44.317814   12940 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0807 18:34:44.317814   12940 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0807 18:34:44.318360   12940 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0807 18:34:44.322136   12940 out.go:204]   - Booting up control plane ...
	I0807 18:34:44.322369   12940 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0807 18:34:44.322641   12940 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0807 18:34:44.322837   12940 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0807 18:34:44.322927   12940 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0807 18:34:44.322927   12940 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0807 18:34:44.322927   12940 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0807 18:34:44.322927   12940 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0807 18:34:44.323937   12940 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0807 18:34:44.323937   12940 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.060662ms
	I0807 18:34:44.323937   12940 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0807 18:34:44.323937   12940 kubeadm.go:310] [api-check] The API server is healthy after 8.002038868s
	I0807 18:34:44.323937   12940 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0807 18:34:44.324926   12940 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0807 18:34:44.324926   12940 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0807 18:34:44.324926   12940 kubeadm.go:310] [mark-control-plane] Marking the node ha-766300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0807 18:34:44.325587   12940 kubeadm.go:310] [bootstrap-token] Using token: flhfyh.589jacjrbykepsdi
	I0807 18:34:44.330287   12940 out.go:204]   - Configuring RBAC rules ...
	I0807 18:34:44.330467   12940 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0807 18:34:44.330637   12940 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0807 18:34:44.330987   12940 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0807 18:34:44.331316   12940 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0807 18:34:44.331662   12940 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0807 18:34:44.331845   12940 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0807 18:34:44.332124   12940 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0807 18:34:44.332178   12940 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0807 18:34:44.332349   12940 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0807 18:34:44.332349   12940 kubeadm.go:310] 
	I0807 18:34:44.332514   12940 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0807 18:34:44.332566   12940 kubeadm.go:310] 
	I0807 18:34:44.332783   12940 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0807 18:34:44.332839   12940 kubeadm.go:310] 
	I0807 18:34:44.332952   12940 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0807 18:34:44.333115   12940 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0807 18:34:44.333221   12940 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0807 18:34:44.333278   12940 kubeadm.go:310] 
	I0807 18:34:44.333475   12940 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0807 18:34:44.333530   12940 kubeadm.go:310] 
	I0807 18:34:44.333687   12940 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0807 18:34:44.333687   12940 kubeadm.go:310] 
	I0807 18:34:44.333805   12940 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0807 18:34:44.334082   12940 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0807 18:34:44.334312   12940 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0807 18:34:44.334370   12940 kubeadm.go:310] 
	I0807 18:34:44.334479   12940 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0807 18:34:44.334479   12940 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0807 18:34:44.334479   12940 kubeadm.go:310] 
	I0807 18:34:44.334479   12940 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token flhfyh.589jacjrbykepsdi \
	I0807 18:34:44.334479   12940 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:54111b4769238fc338d0511ba57c177f732a6d68c0b9cd0aa8cacbbf3c79643b \
	I0807 18:34:44.334479   12940 kubeadm.go:310] 	--control-plane 
	I0807 18:34:44.334479   12940 kubeadm.go:310] 
	I0807 18:34:44.334479   12940 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0807 18:34:44.334479   12940 kubeadm.go:310] 
	I0807 18:34:44.334479   12940 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token flhfyh.589jacjrbykepsdi \
	I0807 18:34:44.335783   12940 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:54111b4769238fc338d0511ba57c177f732a6d68c0b9cd0aa8cacbbf3c79643b 
	I0807 18:34:44.335837   12940 cni.go:84] Creating CNI manager for ""
	I0807 18:34:44.335837   12940 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0807 18:34:44.338995   12940 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0807 18:34:44.356191   12940 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0807 18:34:44.364164   12940 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0807 18:34:44.364224   12940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0807 18:34:44.412094   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0807 18:34:45.102861   12940 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0807 18:34:45.117835   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:45.122600   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-766300 minikube.k8s.io/updated_at=2024_08_07T18_34_45_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e minikube.k8s.io/name=ha-766300 minikube.k8s.io/primary=true
	I0807 18:34:45.140335   12940 ops.go:34] apiserver oom_adj: -16
	I0807 18:34:45.343424   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:45.850343   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:46.346500   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:46.857865   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:47.356483   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:47.856485   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:48.361122   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:48.844631   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:49.358496   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:49.856859   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:50.346475   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:50.846848   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:51.350767   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:51.854577   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:52.351212   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:52.853267   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:53.355509   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:53.856961   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:54.346257   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:54.850142   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:55.356066   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:55.844393   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:56.347816   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:56.850566   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:57.349023   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:57.516203   12940 kubeadm.go:1113] duration metric: took 12.413183s to wait for elevateKubeSystemPrivileges
	I0807 18:34:57.516316   12940 kubeadm.go:394] duration metric: took 27.4249666s to StartCluster
	I0807 18:34:57.516316   12940 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:34:57.516316   12940 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 18:34:57.517390   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:34:57.519005   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0807 18:34:57.519188   12940 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.28.224.88 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 18:34:57.519188   12940 start.go:241] waiting for startup goroutines ...
	I0807 18:34:57.519188   12940 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0807 18:34:57.519375   12940 addons.go:69] Setting storage-provisioner=true in profile "ha-766300"
	I0807 18:34:57.519375   12940 addons.go:69] Setting default-storageclass=true in profile "ha-766300"
	I0807 18:34:57.519523   12940 addons.go:234] Setting addon storage-provisioner=true in "ha-766300"
	I0807 18:34:57.519523   12940 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-766300"
	I0807 18:34:57.519668   12940 host.go:66] Checking if "ha-766300" exists ...
	I0807 18:34:57.519742   12940 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:34:57.520644   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:34:57.521153   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:34:57.731069   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0807 18:34:58.429631   12940 start.go:971] {"host.minikube.internal": 172.28.224.1} host record injected into CoreDNS's ConfigMap
	I0807 18:34:59.923947   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:34:59.923947   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:34:59.924747   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:34:59.924747   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:34:59.925677   12940 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 18:34:59.926571   12940 kapi.go:59] client config for ha-766300: &rest.Config{Host:"https://172.28.239.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-766300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-766300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1da64c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0807 18:34:59.927913   12940 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 18:34:59.928172   12940 cert_rotation.go:137] Starting client certificate rotation controller
	I0807 18:34:59.928547   12940 addons.go:234] Setting addon default-storageclass=true in "ha-766300"
	I0807 18:34:59.928547   12940 host.go:66] Checking if "ha-766300" exists ...
	I0807 18:34:59.929884   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:34:59.930712   12940 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 18:34:59.930712   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0807 18:34:59.930784   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:35:02.353664   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:35:02.353664   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:02.353785   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:35:02.414526   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:35:02.414726   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:02.414726   12940 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0807 18:35:02.414895   12940 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0807 18:35:02.414981   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:35:04.881784   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:35:04.881784   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:04.882019   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:35:05.237113   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:35:05.237113   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:05.237113   12940 sshutil.go:53] new ssh client: &{IP:172.28.224.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\id_rsa Username:docker}
	I0807 18:35:05.392764   12940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 18:35:07.610036   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:35:07.610036   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:07.611362   12940 sshutil.go:53] new ssh client: &{IP:172.28.224.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\id_rsa Username:docker}
	I0807 18:35:07.744490   12940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0807 18:35:07.905924   12940 round_trippers.go:463] GET https://172.28.239.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0807 18:35:07.905959   12940 round_trippers.go:469] Request Headers:
	I0807 18:35:07.905959   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:35:07.906011   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:35:07.921022   12940 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0807 18:35:07.922156   12940 round_trippers.go:463] PUT https://172.28.239.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0807 18:35:07.922156   12940 round_trippers.go:469] Request Headers:
	I0807 18:35:07.922156   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:35:07.922156   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:35:07.922156   12940 round_trippers.go:473]     Content-Type: application/json
	I0807 18:35:07.924751   12940 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:35:07.929005   12940 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0807 18:35:07.932966   12940 addons.go:510] duration metric: took 10.4136452s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0807 18:35:07.933137   12940 start.go:246] waiting for cluster config update ...
	I0807 18:35:07.933137   12940 start.go:255] writing updated cluster config ...
	I0807 18:35:07.936483   12940 out.go:177] 
	I0807 18:35:07.948348   12940 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:35:07.948348   12940 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\config.json ...
	I0807 18:35:07.954332   12940 out.go:177] * Starting "ha-766300-m02" control-plane node in "ha-766300" cluster
	I0807 18:35:07.957520   12940 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 18:35:07.957520   12940 cache.go:56] Caching tarball of preloaded images
	I0807 18:35:07.957520   12940 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0807 18:35:07.958338   12940 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 18:35:07.958338   12940 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\config.json ...
	I0807 18:35:07.962344   12940 start.go:360] acquireMachinesLock for ha-766300-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 18:35:07.962344   12940 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-766300-m02"
	I0807 18:35:07.962629   12940 start.go:93] Provisioning new machine with config: &{Name:ha-766300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-766300 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.224.88 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 18:35:07.962629   12940 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0807 18:35:07.964539   12940 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 18:35:07.965667   12940 start.go:159] libmachine.API.Create for "ha-766300" (driver="hyperv")
	I0807 18:35:07.965824   12940 client.go:168] LocalClient.Create starting
	I0807 18:35:07.966373   12940 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0807 18:35:07.966672   12940 main.go:141] libmachine: Decoding PEM data...
	I0807 18:35:07.966750   12940 main.go:141] libmachine: Parsing certificate...
	I0807 18:35:07.966938   12940 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0807 18:35:07.967301   12940 main.go:141] libmachine: Decoding PEM data...
	I0807 18:35:07.967301   12940 main.go:141] libmachine: Parsing certificate...
	I0807 18:35:07.967466   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0807 18:35:09.914327   12940 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0807 18:35:09.914969   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:09.915050   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0807 18:35:11.674199   12940 main.go:141] libmachine: [stdout =====>] : False
	
	I0807 18:35:11.674418   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:11.674418   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0807 18:35:13.195170   12940 main.go:141] libmachine: [stdout =====>] : True
	
	I0807 18:35:13.195170   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:13.195170   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0807 18:35:16.993620   12940 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0807 18:35:16.993620   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:16.996186   12940 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0807 18:35:17.460032   12940 main.go:141] libmachine: Creating SSH key...
	I0807 18:35:18.169685   12940 main.go:141] libmachine: Creating VM...
	I0807 18:35:18.169685   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0807 18:35:21.142745   12940 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0807 18:35:21.143721   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:21.143721   12940 main.go:141] libmachine: Using switch "Default Switch"
	I0807 18:35:21.143942   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0807 18:35:22.924019   12940 main.go:141] libmachine: [stdout =====>] : True
	
	I0807 18:35:22.924019   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:22.924019   12940 main.go:141] libmachine: Creating VHD
	I0807 18:35:22.924019   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0807 18:35:26.845077   12940 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 75EE8676-3085-4590-9428-31ED3F0D41FD
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0807 18:35:26.845077   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:26.845346   12940 main.go:141] libmachine: Writing magic tar header
	I0807 18:35:26.845346   12940 main.go:141] libmachine: Writing SSH key tar header
	I0807 18:35:26.856021   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0807 18:35:30.118066   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:35:30.118066   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:30.118201   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02\disk.vhd' -SizeBytes 20000MB
	I0807 18:35:32.744624   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:35:32.744624   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:32.744624   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-766300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0807 18:35:36.478358   12940 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-766300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0807 18:35:36.478358   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:36.478358   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-766300-m02 -DynamicMemoryEnabled $false
	I0807 18:35:38.780860   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:35:38.781192   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:38.781277   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-766300-m02 -Count 2
	I0807 18:35:41.045158   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:35:41.045158   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:41.045158   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-766300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02\boot2docker.iso'
	I0807 18:35:43.715991   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:35:43.715991   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:43.715991   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-766300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02\disk.vhd'
	I0807 18:35:46.471023   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:35:46.471335   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:46.471335   12940 main.go:141] libmachine: Starting VM...
	I0807 18:35:46.471335   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-766300-m02
	I0807 18:35:49.621276   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:35:49.621276   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:49.621276   12940 main.go:141] libmachine: Waiting for host to start...
	I0807 18:35:49.622264   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:35:52.068195   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:35:52.068195   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:52.068195   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:35:54.706800   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:35:54.706800   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:55.722607   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:35:58.046823   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:35:58.046823   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:58.047734   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:36:00.675050   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:36:00.675166   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:01.687903   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:03.986191   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:03.986191   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:03.986960   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:36:06.649809   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:36:06.650812   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:07.655976   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:09.954647   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:09.954647   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:09.954764   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:36:12.599802   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:36:12.599837   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:13.613994   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:15.920350   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:15.920350   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:15.920940   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:36:18.594353   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:36:18.594870   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:18.595201   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:20.779069   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:20.779069   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:20.779069   12940 machine.go:94] provisionDockerMachine start ...
	I0807 18:36:20.779278   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:23.058274   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:23.058274   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:23.058274   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:36:25.701346   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:36:25.702625   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:25.708330   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:36:25.723897   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.238.183 22 <nil> <nil>}
	I0807 18:36:25.723897   12940 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 18:36:25.857976   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0807 18:36:25.857976   12940 buildroot.go:166] provisioning hostname "ha-766300-m02"
	I0807 18:36:25.857976   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:28.087157   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:28.087157   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:28.087796   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:36:30.738124   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:36:30.738313   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:30.743881   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:36:30.744576   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.238.183 22 <nil> <nil>}
	I0807 18:36:30.744576   12940 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-766300-m02 && echo "ha-766300-m02" | sudo tee /etc/hostname
	I0807 18:36:30.907709   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-766300-m02
	
	I0807 18:36:30.907709   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:33.110411   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:33.110626   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:33.110736   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:36:35.765625   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:36:35.765625   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:35.771496   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:36:35.771743   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.238.183 22 <nil> <nil>}
	I0807 18:36:35.771743   12940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-766300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-766300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-766300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 18:36:35.929376   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 18:36:35.929376   12940 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0807 18:36:35.929376   12940 buildroot.go:174] setting up certificates
	I0807 18:36:35.929376   12940 provision.go:84] configureAuth start
	I0807 18:36:35.929911   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:38.100883   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:38.100883   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:38.100883   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:36:40.733289   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:36:40.733289   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:40.733289   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:42.965503   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:42.965503   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:42.965503   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:36:45.631472   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:36:45.631727   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:45.631727   12940 provision.go:143] copyHostCerts
	I0807 18:36:45.631908   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0807 18:36:45.631908   12940 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0807 18:36:45.631908   12940 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0807 18:36:45.632745   12940 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0807 18:36:45.634128   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0807 18:36:45.634858   12940 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0807 18:36:45.635063   12940 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0807 18:36:45.636219   12940 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0807 18:36:45.638229   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0807 18:36:45.638229   12940 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0807 18:36:45.638229   12940 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0807 18:36:45.638848   12940 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0807 18:36:45.639533   12940 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-766300-m02 san=[127.0.0.1 172.28.238.183 ha-766300-m02 localhost minikube]
	I0807 18:36:45.783303   12940 provision.go:177] copyRemoteCerts
	I0807 18:36:45.795863   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 18:36:45.795863   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:48.045089   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:48.045089   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:48.045444   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:36:50.736902   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:36:50.737115   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:50.737515   12940 sshutil.go:53] new ssh client: &{IP:172.28.238.183 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02\id_rsa Username:docker}
	I0807 18:36:50.843622   12940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0476939s)
	I0807 18:36:50.843622   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0807 18:36:50.843622   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 18:36:50.890611   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0807 18:36:50.890611   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0807 18:36:50.936610   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0807 18:36:50.937084   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0807 18:36:50.983572   12940 provision.go:87] duration metric: took 15.0540031s to configureAuth
	I0807 18:36:50.983572   12940 buildroot.go:189] setting minikube options for container-runtime
	I0807 18:36:50.984602   12940 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:36:50.984668   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:53.184719   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:53.184719   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:53.185055   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:36:55.896096   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:36:55.896096   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:55.904290   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:36:55.904921   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.238.183 22 <nil> <nil>}
	I0807 18:36:55.904921   12940 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0807 18:36:56.046427   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0807 18:36:56.046493   12940 buildroot.go:70] root file system type: tmpfs
	I0807 18:36:56.046710   12940 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0807 18:36:56.046785   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:58.320104   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:58.320104   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:58.320915   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:37:01.020160   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:37:01.020358   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:01.026610   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:37:01.026857   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.238.183 22 <nil> <nil>}
	I0807 18:37:01.026857   12940 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.224.88"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0807 18:37:01.197751   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.224.88
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0807 18:37:01.197751   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:37:03.432200   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:37:03.432200   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:03.432837   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:37:06.109391   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:37:06.109391   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:06.116193   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:37:06.116941   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.238.183 22 <nil> <nil>}
	I0807 18:37:06.116995   12940 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0807 18:37:08.375183   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0807 18:37:08.375183   12940 machine.go:97] duration metric: took 47.5955049s to provisionDockerMachine
	I0807 18:37:08.375263   12940 client.go:171] duration metric: took 2m0.4078981s to LocalClient.Create
	I0807 18:37:08.375263   12940 start.go:167] duration metric: took 2m0.4080549s to libmachine.API.Create "ha-766300"
	I0807 18:37:08.375373   12940 start.go:293] postStartSetup for "ha-766300-m02" (driver="hyperv")
	I0807 18:37:08.375410   12940 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 18:37:08.388818   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 18:37:08.388818   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:37:10.592319   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:37:10.592319   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:10.592564   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:37:13.281666   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:37:13.281666   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:13.281666   12940 sshutil.go:53] new ssh client: &{IP:172.28.238.183 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02\id_rsa Username:docker}
	I0807 18:37:13.391933   12940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0030511s)
	I0807 18:37:13.405868   12940 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 18:37:13.412858   12940 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 18:37:13.412858   12940 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0807 18:37:13.413027   12940 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0807 18:37:13.414724   12940 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> 96602.pem in /etc/ssl/certs
	I0807 18:37:13.414724   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> /etc/ssl/certs/96602.pem
	I0807 18:37:13.428993   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 18:37:13.448043   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /etc/ssl/certs/96602.pem (1708 bytes)
	I0807 18:37:13.490279   12940 start.go:296] duration metric: took 5.1148404s for postStartSetup
	I0807 18:37:13.494571   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:37:15.694816   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:37:15.695087   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:15.695153   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:37:18.322088   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:37:18.322342   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:18.322342   12940 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\config.json ...
	I0807 18:37:18.325052   12940 start.go:128] duration metric: took 2m10.3607544s to createHost
	I0807 18:37:18.325052   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:37:20.540626   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:37:20.540626   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:20.540626   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:37:23.160702   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:37:23.160845   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:23.166064   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:37:23.166782   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.238.183 22 <nil> <nil>}
	I0807 18:37:23.166877   12940 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 18:37:23.300148   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723055843.320326868
	
	I0807 18:37:23.300213   12940 fix.go:216] guest clock: 1723055843.320326868
	I0807 18:37:23.300314   12940 fix.go:229] Guest: 2024-08-07 18:37:23.320326868 +0000 UTC Remote: 2024-08-07 18:37:18.3250521 +0000 UTC m=+347.324070901 (delta=4.995274768s)
	I0807 18:37:23.300446   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:37:25.496425   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:37:25.496425   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:25.496673   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:37:28.158274   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:37:28.158274   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:28.164942   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:37:28.165608   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.238.183 22 <nil> <nil>}
	I0807 18:37:28.165694   12940 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1723055843
	I0807 18:37:28.317585   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Aug  7 18:37:23 UTC 2024
	
	I0807 18:37:28.317585   12940 fix.go:236] clock set: Wed Aug  7 18:37:23 UTC 2024
	 (err=<nil>)
	I0807 18:37:28.317585   12940 start.go:83] releasing machines lock for "ha-766300-m02", held for 2m20.3532422s
	I0807 18:37:28.318151   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:37:30.547186   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:37:30.547186   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:30.547484   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:37:33.205575   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:37:33.206611   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:33.209666   12940 out.go:177] * Found network options:
	I0807 18:37:33.212516   12940 out.go:177]   - NO_PROXY=172.28.224.88
	W0807 18:37:33.214784   12940 proxy.go:119] fail to check proxy env: Error ip not in block
	I0807 18:37:33.216848   12940 out.go:177]   - NO_PROXY=172.28.224.88
	W0807 18:37:33.218923   12940 proxy.go:119] fail to check proxy env: Error ip not in block
	W0807 18:37:33.221159   12940 proxy.go:119] fail to check proxy env: Error ip not in block
	I0807 18:37:33.222553   12940 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0807 18:37:33.223491   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:37:33.232804   12940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0807 18:37:33.232804   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:37:35.483076   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:37:35.483235   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:35.483345   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:37:35.491004   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:37:35.491954   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:35.491954   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:37:38.237351   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:37:38.238353   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:38.238723   12940 sshutil.go:53] new ssh client: &{IP:172.28.238.183 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02\id_rsa Username:docker}
	I0807 18:37:38.261652   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:37:38.261652   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:38.262051   12940 sshutil.go:53] new ssh client: &{IP:172.28.238.183 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02\id_rsa Username:docker}
	I0807 18:37:38.333371   12940 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1005017s)
	W0807 18:37:38.333491   12940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 18:37:38.345580   12940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 18:37:38.350756   12940 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1281371s)
	W0807 18:37:38.350756   12940 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0807 18:37:38.378767   12940 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0807 18:37:38.378796   12940 start.go:495] detecting cgroup driver to use...
	I0807 18:37:38.378796   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 18:37:38.430948   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0807 18:37:38.468151   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0807 18:37:38.476259   12940 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0807 18:37:38.476259   12940 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0807 18:37:38.492232   12940 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0807 18:37:38.506604   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0807 18:37:38.539977   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 18:37:38.571903   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0807 18:37:38.602927   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 18:37:38.637284   12940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 18:37:38.670638   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0807 18:37:38.705179   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0807 18:37:38.739356   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
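	The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place over ssh. A hedged sketch of the same substitutions against a scratch copy of the file — the sample TOML content below is invented for illustration, and GNU sed is assumed:

```shell
# Sketch of the containerd edits in the log, applied to a scratch copy of
# config.toml (the sample TOML below is invented; GNU sed assumed).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  restrict_oom_score_adj = true
  SystemdCgroup = true
EOF
# The same substitutions minikube runs remotely: pin the pause image,
# relax OOM score adjustment, and select the cgroupfs driver by setting
# SystemdCgroup = false.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```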
	I0807 18:37:38.774705   12940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 18:37:38.806873   12940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 18:37:38.840678   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:37:39.046613   12940 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0807 18:37:39.080927   12940 start.go:495] detecting cgroup driver to use...
	I0807 18:37:39.093528   12940 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0807 18:37:39.138387   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 18:37:39.177007   12940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 18:37:39.222659   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 18:37:39.263855   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 18:37:39.301378   12940 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0807 18:37:39.356376   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 18:37:39.379997   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 18:37:39.427757   12940 ssh_runner.go:195] Run: which cri-dockerd
	I0807 18:37:39.445554   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0807 18:37:39.463279   12940 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0807 18:37:39.506445   12940 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0807 18:37:39.710452   12940 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0807 18:37:39.921210   12940 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0807 18:37:39.921338   12940 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0807 18:37:39.969683   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:37:40.175180   12940 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 18:37:42.775498   12940 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6002846s)
	I0807 18:37:42.787266   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0807 18:37:42.824462   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 18:37:42.858786   12940 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0807 18:37:43.055542   12940 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0807 18:37:43.263136   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:37:43.463366   12940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0807 18:37:43.503986   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 18:37:43.537989   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:37:43.731198   12940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0807 18:37:43.843990   12940 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0807 18:37:43.855144   12940 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0807 18:37:43.864429   12940 start.go:563] Will wait 60s for crictl version
	I0807 18:37:43.875289   12940 ssh_runner.go:195] Run: which crictl
	I0807 18:37:43.894303   12940 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 18:37:43.947656   12940 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0807 18:37:43.955344   12940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 18:37:44.001272   12940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 18:37:44.041209   12940 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0807 18:37:44.044251   12940 out.go:177]   - env NO_PROXY=172.28.224.88
	I0807 18:37:44.047177   12940 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0807 18:37:44.051473   12940 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0807 18:37:44.051473   12940 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0807 18:37:44.051473   12940 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0807 18:37:44.051473   12940 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:f6:3a:6a Flags:up|broadcast|multicast|running}
	I0807 18:37:44.054209   12940 ip.go:210] interface addr: fe80::e7eb:b592:d388:ff99/64
	I0807 18:37:44.054209   12940 ip.go:210] interface addr: 172.28.224.1/20
	I0807 18:37:44.064219   12940 ssh_runner.go:195] Run: grep 172.28.224.1	host.minikube.internal$ /etc/hosts
	I0807 18:37:44.070307   12940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
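	The `grep -v` / `echo` pipeline above is an idempotent way to pin a hosts entry: drop any stale line for the name, then append the current mapping. A sketch against a scratch file instead of `/etc/hosts`, so no sudo is needed (the stale `172.28.0.9` entry is hypothetical):

```shell
# Idempotent hosts-entry pattern from the log, on a scratch file.
# The pre-existing 172.28.0.9 mapping is a hypothetical stale entry.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.28.0.9\thost.minikube.internal\n' > "$hosts"
ip=172.28.224.1
# Remove any existing line ending in "<tab>host.minikube.internal",
# then append the current IP.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '%s\thost.minikube.internal\n' "$ip"; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep 'host.minikube.internal' "$hosts"
```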
	I0807 18:37:44.091956   12940 mustload.go:65] Loading cluster: ha-766300
	I0807 18:37:44.092908   12940 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:37:44.093863   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:37:46.272572   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:37:46.272572   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:46.272958   12940 host.go:66] Checking if "ha-766300" exists ...
	I0807 18:37:46.273872   12940 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300 for IP: 172.28.238.183
	I0807 18:37:46.273872   12940 certs.go:194] generating shared ca certs ...
	I0807 18:37:46.273872   12940 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:37:46.274452   12940 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0807 18:37:46.274796   12940 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0807 18:37:46.274796   12940 certs.go:256] generating profile certs ...
	I0807 18:37:46.275524   12940 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\client.key
	I0807 18:37:46.275524   12940 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.489d5c54
	I0807 18:37:46.275524   12940 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.489d5c54 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.224.88 172.28.238.183 172.28.239.254]
	I0807 18:37:46.512734   12940 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.489d5c54 ...
	I0807 18:37:46.512734   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.489d5c54: {Name:mk4a736e66d978df518f4811a6b19be15d696196 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:37:46.514568   12940 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.489d5c54 ...
	I0807 18:37:46.514568   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.489d5c54: {Name:mk835bd6912ea9cf8ea8bcda18b1c4d6981c24bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:37:46.515232   12940 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.489d5c54 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt
	I0807 18:37:46.530105   12940 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.489d5c54 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key
	I0807 18:37:46.531305   12940 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.key
	I0807 18:37:46.531863   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0807 18:37:46.531863   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0807 18:37:46.532158   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0807 18:37:46.532158   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0807 18:37:46.532476   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0807 18:37:46.532632   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0807 18:37:46.533507   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0807 18:37:46.533736   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0807 18:37:46.533736   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem (1338 bytes)
	W0807 18:37:46.534314   12940 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660_empty.pem, impossibly tiny 0 bytes
	I0807 18:37:46.534603   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0807 18:37:46.534925   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0807 18:37:46.535269   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0807 18:37:46.535409   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0807 18:37:46.535950   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem (1708 bytes)
	I0807 18:37:46.536226   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> /usr/share/ca-certificates/96602.pem
	I0807 18:37:46.536226   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:37:46.536226   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem -> /usr/share/ca-certificates/9660.pem
	I0807 18:37:46.536226   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:37:48.740698   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:37:48.740698   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:48.741539   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:37:51.392837   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:37:51.393064   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:51.393470   12940 sshutil.go:53] new ssh client: &{IP:172.28.224.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\id_rsa Username:docker}
	I0807 18:37:51.491083   12940 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0807 18:37:51.499745   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0807 18:37:51.533372   12940 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0807 18:37:51.540914   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0807 18:37:51.573994   12940 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0807 18:37:51.585414   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0807 18:37:51.620322   12940 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0807 18:37:51.627244   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0807 18:37:51.659349   12940 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0807 18:37:51.665862   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0807 18:37:51.696890   12940 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0807 18:37:51.703623   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0807 18:37:51.724486   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 18:37:51.775946   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 18:37:51.820104   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 18:37:51.866057   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0807 18:37:51.912900   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0807 18:37:51.968238   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0807 18:37:52.016755   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 18:37:52.059757   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0807 18:37:52.106704   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /usr/share/ca-certificates/96602.pem (1708 bytes)
	I0807 18:37:52.156192   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 18:37:52.204932   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem --> /usr/share/ca-certificates/9660.pem (1338 bytes)
	I0807 18:37:52.252496   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0807 18:37:52.288772   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0807 18:37:52.318688   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0807 18:37:52.354162   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0807 18:37:52.385177   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0807 18:37:52.415596   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0807 18:37:52.446524   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0807 18:37:52.489354   12940 ssh_runner.go:195] Run: openssl version
	I0807 18:37:52.507700   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96602.pem && ln -fs /usr/share/ca-certificates/96602.pem /etc/ssl/certs/96602.pem"
	I0807 18:37:52.541699   12940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96602.pem
	I0807 18:37:52.552710   12940 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 17:50 /usr/share/ca-certificates/96602.pem
	I0807 18:37:52.565240   12940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96602.pem
	I0807 18:37:52.585753   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96602.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 18:37:52.616903   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 18:37:52.651089   12940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:37:52.658120   12940 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:33 /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:37:52.671306   12940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:37:52.690559   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 18:37:52.721170   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9660.pem && ln -fs /usr/share/ca-certificates/9660.pem /etc/ssl/certs/9660.pem"
	I0807 18:37:52.757190   12940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9660.pem
	I0807 18:37:52.764942   12940 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 17:50 /usr/share/ca-certificates/9660.pem
	I0807 18:37:52.778014   12940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9660.pem
	I0807 18:37:52.797785   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9660.pem /etc/ssl/certs/51391683.0"
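	The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's subject-hash lookup: a CA certificate is found at verification time via a symlink named `<subject-hash>.0` in the certs directory. A self-contained sketch — the throwaway demo CA and temp directory are illustrative, not from the log:

```shell
# Sketch of the CA-hash symlink step: OpenSSL locates a trusted CA by a
# <subject-hash>.0 symlink (mirrors the /etc/ssl/certs/<hash>.0 links above).
# The self-signed demo CA is invented for illustration.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=demoCA' \
  -keyout "$dir/ca.key" -out "$dir/demoCA.pem" 2>/dev/null
# Compute the subject hash and create the lookup symlink.
hash=$(openssl x509 -hash -noout -in "$dir/demoCA.pem")
ln -fs "$dir/demoCA.pem" "$dir/$hash.0"
ls -l "$dir/$hash.0"
```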
	I0807 18:37:52.827163   12940 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 18:37:52.833238   12940 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0807 18:37:52.833238   12940 kubeadm.go:934] updating node {m02 172.28.238.183 8443 v1.30.3 docker true true} ...
	I0807 18:37:52.833777   12940 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-766300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.238.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-766300 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 18:37:52.833777   12940 kube-vip.go:115] generating kube-vip config ...
	I0807 18:37:52.845315   12940 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0807 18:37:52.869424   12940 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0807 18:37:52.869424   12940 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.239.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0807 18:37:52.880941   12940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 18:37:52.899765   12940 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0807 18:37:52.911814   12940 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0807 18:37:52.932980   12940 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm
	I0807 18:37:52.932980   12940 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet
	I0807 18:37:52.932980   12940 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl
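	The three download URLs above carry a `checksum=file:...sha256` query, so each binary is verified against its `.sha256` sidecar after download. The core of that verification is a sha256 comparison, sketched here on a scratch file with no network involved:

```shell
# The .sha256 sidecar verification from the downloads above reduces to a
# sha256sum check; the "binary" here is a scratch file, not a real download.
f=$(mktemp)
echo 'pretend-kubeadm-binary' > "$f"
sha256sum "$f" > "$f.sha256"   # stands in for the published .sha256 file
# -c recomputes the digest and compares it to the recorded value;
# a mismatch exits nonzero and the download would be rejected.
sha256sum -c "$f.sha256" && echo verified
```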
	I0807 18:37:53.977760   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0807 18:37:53.989575   12940 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0807 18:37:53.991049   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0807 18:37:53.997350   12940 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0807 18:37:53.997350   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0807 18:37:54.011120   12940 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0807 18:37:54.067984   12940 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0807 18:37:54.067984   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0807 18:37:58.863746   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:37:58.892336   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0807 18:37:58.904621   12940 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0807 18:37:58.911138   12940 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0807 18:37:58.911138   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0807 18:37:59.582059   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0807 18:37:59.600435   12940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0807 18:37:59.634849   12940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 18:37:59.669321   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0807 18:37:59.718455   12940 ssh_runner.go:195] Run: grep 172.28.239.254	control-plane.minikube.internal$ /etc/hosts
	I0807 18:37:59.724801   12940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.239.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 18:37:59.759709   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:37:59.961328   12940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 18:37:59.992768   12940 host.go:66] Checking if "ha-766300" exists ...
	I0807 18:37:59.993394   12940 start.go:317] joinCluster: &{Name:ha-766300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-766300 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.224.88 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.238.183 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:37:59.993394   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0807 18:37:59.993394   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:38:02.202478   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:38:02.202478   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:38:02.202478   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:38:04.843622   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:38:04.843622   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:38:04.843970   12940 sshutil.go:53] new ssh client: &{IP:172.28.224.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\id_rsa Username:docker}
	I0807 18:38:05.262440   12940 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.2689779s)
	I0807 18:38:05.262440   12940 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.28.238.183 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 18:38:05.262440   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token l4io45.pb2zkt4q5s62d1mj --discovery-token-ca-cert-hash sha256:54111b4769238fc338d0511ba57c177f732a6d68c0b9cd0aa8cacbbf3c79643b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-766300-m02 --control-plane --apiserver-advertise-address=172.28.238.183 --apiserver-bind-port=8443"
	I0807 18:38:50.109574   12940 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token l4io45.pb2zkt4q5s62d1mj --discovery-token-ca-cert-hash sha256:54111b4769238fc338d0511ba57c177f732a6d68c0b9cd0aa8cacbbf3c79643b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-766300-m02 --control-plane --apiserver-advertise-address=172.28.238.183 --apiserver-bind-port=8443": (44.8465607s)
	I0807 18:38:50.109574   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0807 18:38:50.921581   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-766300-m02 minikube.k8s.io/updated_at=2024_08_07T18_38_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e minikube.k8s.io/name=ha-766300 minikube.k8s.io/primary=false
	I0807 18:38:51.106053   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-766300-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0807 18:38:51.262899   12940 start.go:319] duration metric: took 51.2688489s to joinCluster
	I0807 18:38:51.262989   12940 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.28.238.183 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 18:38:51.263754   12940 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:38:51.265982   12940 out.go:177] * Verifying Kubernetes components...
	I0807 18:38:51.282793   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:38:51.685191   12940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 18:38:51.728046   12940 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 18:38:51.729177   12940 kapi.go:59] client config for ha-766300: &rest.Config{Host:"https://172.28.239.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-766300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-766300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1da64c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0807 18:38:51.729177   12940 kubeadm.go:483] Overriding stale ClientConfig host https://172.28.239.254:8443 with https://172.28.224.88:8443
	I0807 18:38:51.730381   12940 node_ready.go:35] waiting up to 6m0s for node "ha-766300-m02" to be "Ready" ...
	I0807 18:38:51.730711   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:51.730745   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:51.730745   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:51.730799   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:51.749534   12940 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0807 18:38:52.246251   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:52.246251   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:52.246251   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:52.246251   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:52.253461   12940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:38:52.739537   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:52.739537   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:52.739537   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:52.739537   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:52.746118   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:38:53.245058   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:53.245058   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:53.245058   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:53.245058   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:53.250333   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:38:53.735157   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:53.735157   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:53.735157   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:53.735157   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:53.739790   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:38:53.742073   12940 node_ready.go:53] node "ha-766300-m02" has status "Ready":"False"
	I0807 18:38:54.243095   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:54.243095   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:54.243179   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:54.243179   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:54.247723   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:38:54.736141   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:54.736141   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:54.736141   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:54.736141   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:54.742732   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:38:55.242716   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:55.242716   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:55.242716   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:55.242716   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:55.247041   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:38:55.746317   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:55.746317   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:55.746317   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:55.746317   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:55.752566   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:38:55.753470   12940 node_ready.go:53] node "ha-766300-m02" has status "Ready":"False"
	I0807 18:38:56.237912   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:56.238143   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:56.238176   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:56.238176   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:56.379430   12940 round_trippers.go:574] Response Status: 200 OK in 141 milliseconds
	I0807 18:38:56.745325   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:56.745325   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:56.745325   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:56.745325   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:56.749026   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:38:57.234342   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:57.234593   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:57.234593   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:57.234593   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:57.242914   12940 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:38:57.739757   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:57.739842   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:57.739842   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:57.739842   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:58.087924   12940 round_trippers.go:574] Response Status: 200 OK in 348 milliseconds
	I0807 18:38:58.088936   12940 node_ready.go:53] node "ha-766300-m02" has status "Ready":"False"
	I0807 18:38:58.242640   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:58.242640   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:58.242640   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:58.242640   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:58.271534   12940 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0807 18:38:58.732766   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:58.732766   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:58.732766   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:58.732766   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:58.738355   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:38:59.236476   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:59.236557   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:59.236557   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:59.236557   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:59.241327   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:38:59.743080   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:59.743289   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:59.743289   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:59.743289   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:59.749049   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:00.232254   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:00.232254   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:00.232254   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:00.232254   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:00.240194   12940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:39:00.241266   12940 node_ready.go:53] node "ha-766300-m02" has status "Ready":"False"
	I0807 18:39:00.733817   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:00.733917   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:00.733917   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:00.733917   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:00.740940   12940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:39:01.233487   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:01.233487   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:01.233658   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:01.233658   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:01.241793   12940 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:39:01.746809   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:01.746809   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:01.746809   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:01.746809   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:01.751404   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:02.233065   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:02.233065   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:02.233189   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:02.233189   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:02.247897   12940 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0807 18:39:02.248899   12940 node_ready.go:53] node "ha-766300-m02" has status "Ready":"False"
	I0807 18:39:02.737635   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:02.737635   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:02.737635   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:02.737635   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:02.743233   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:03.231549   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:03.231645   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:03.231645   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:03.231645   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:03.236609   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:03.741192   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:03.741192   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:03.741192   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:03.741192   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:03.746253   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:04.244598   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:04.244683   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:04.244683   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:04.244683   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:04.249322   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:04.249916   12940 node_ready.go:53] node "ha-766300-m02" has status "Ready":"False"
	I0807 18:39:04.742767   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:04.742767   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:04.743122   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:04.743122   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:04.750066   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:39:05.246560   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:05.246560   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:05.246560   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:05.246674   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:05.251801   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:05.735472   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:05.735472   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:05.735472   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:05.735472   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:05.749698   12940 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0807 18:39:06.238904   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:06.239208   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:06.239208   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:06.239330   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:06.246952   12940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:39:06.740843   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:06.740843   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:06.740843   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:06.740843   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:06.747242   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:39:06.748189   12940 node_ready.go:53] node "ha-766300-m02" has status "Ready":"False"
	I0807 18:39:07.244581   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:07.244581   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:07.244684   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:07.244684   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:07.249655   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:07.741975   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:07.741975   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:07.741975   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:07.741975   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:07.746745   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:08.244945   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:08.244945   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:08.244945   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:08.244945   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:08.250140   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:08.731367   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:08.731367   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:08.731610   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:08.731610   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:08.740747   12940 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0807 18:39:09.245973   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:09.245973   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:09.245973   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:09.245973   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:09.252610   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:39:09.253412   12940 node_ready.go:53] node "ha-766300-m02" has status "Ready":"False"
	I0807 18:39:09.745943   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:09.745943   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:09.745943   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:09.745943   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:09.750846   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:10.231151   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:10.231255   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:10.231255   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:10.231255   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:10.235536   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:10.743625   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:10.743625   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:10.743625   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:10.743625   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:10.749213   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:11.245292   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:11.245292   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:11.245292   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:11.245292   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:11.249646   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:11.745246   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:11.745314   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:11.745314   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:11.745314   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:11.751051   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:11.752600   12940 node_ready.go:53] node "ha-766300-m02" has status "Ready":"False"
	I0807 18:39:12.231429   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:12.231486   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:12.231486   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:12.231486   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:12.238858   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:39:12.734384   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:12.734384   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:12.734576   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:12.734576   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:12.739469   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:13.235743   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:13.235743   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:13.235743   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:13.235876   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:13.244191   12940 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:39:13.735580   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:13.735644   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:13.735644   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:13.735644   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:13.741413   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:14.238579   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:14.238579   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.238579   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.238579   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.243252   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:14.245271   12940 node_ready.go:49] node "ha-766300-m02" has status "Ready":"True"
	I0807 18:39:14.245271   12940 node_ready.go:38] duration metric: took 22.5145588s for node "ha-766300-m02" to be "Ready" ...
	I0807 18:39:14.245271   12940 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 18:39:14.245566   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods
	I0807 18:39:14.245586   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.245586   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.245586   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.253871   12940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:39:14.263554   12940 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9tjv6" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:14.263554   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9tjv6
	I0807 18:39:14.263554   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.263554   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.263554   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.267712   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:14.268836   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:39:14.269430   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.269700   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.269881   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.282860   12940 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0807 18:39:14.283691   12940 pod_ready.go:92] pod "coredns-7db6d8ff4d-9tjv6" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:14.283747   12940 pod_ready.go:81] duration metric: took 20.1928ms for pod "coredns-7db6d8ff4d-9tjv6" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:14.283747   12940 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fqjwg" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:14.283892   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqjwg
	I0807 18:39:14.283923   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.283923   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.283923   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.288664   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:14.290246   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:39:14.290300   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.290300   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.290352   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.296251   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:14.296978   12940 pod_ready.go:92] pod "coredns-7db6d8ff4d-fqjwg" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:14.296978   12940 pod_ready.go:81] duration metric: took 13.2299ms for pod "coredns-7db6d8ff4d-fqjwg" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:14.296978   12940 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:14.296978   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/etcd-ha-766300
	I0807 18:39:14.296978   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.296978   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.296978   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.301437   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:39:14.302377   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:39:14.302477   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.302477   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.302477   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.305735   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:39:14.307023   12940 pod_ready.go:92] pod "etcd-ha-766300" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:14.307088   12940 pod_ready.go:81] duration metric: took 10.1102ms for pod "etcd-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:14.307088   12940 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:14.307207   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/etcd-ha-766300-m02
	I0807 18:39:14.307207   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.307267   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.307267   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.310704   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:39:14.311747   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:14.311747   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.311834   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.311834   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.315364   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:39:14.316926   12940 pod_ready.go:92] pod "etcd-ha-766300-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:14.316926   12940 pod_ready.go:81] duration metric: took 9.8379ms for pod "etcd-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:14.316926   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:14.440471   12940 request.go:629] Waited for 123.2095ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-766300
	I0807 18:39:14.440547   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-766300
	I0807 18:39:14.440547   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.440580   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.440580   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.446805   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:39:14.645687   12940 request.go:629] Waited for 196.1117ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:39:14.646102   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:39:14.646102   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.646102   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.646102   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.650903   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:14.651827   12940 pod_ready.go:92] pod "kube-apiserver-ha-766300" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:14.651929   12940 pod_ready.go:81] duration metric: took 334.999ms for pod "kube-apiserver-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:14.651929   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:14.850199   12940 request.go:629] Waited for 197.6715ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-766300-m02
	I0807 18:39:14.850285   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-766300-m02
	I0807 18:39:14.850365   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.850365   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.850540   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.856076   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:15.052744   12940 request.go:629] Waited for 195.6101ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:15.052831   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:15.052831   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:15.052901   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:15.052901   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:15.063562   12940 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0807 18:39:15.064674   12940 pod_ready.go:92] pod "kube-apiserver-ha-766300-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:15.064734   12940 pod_ready.go:81] duration metric: took 412.7994ms for pod "kube-apiserver-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:15.064787   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:15.239176   12940 request.go:629] Waited for 174.167ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-766300
	I0807 18:39:15.239176   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-766300
	I0807 18:39:15.239176   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:15.239459   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:15.239459   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:15.244373   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:15.443606   12940 request.go:629] Waited for 197.7794ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:39:15.443765   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:39:15.443765   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:15.443765   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:15.443765   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:15.448658   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:15.450479   12940 pod_ready.go:92] pod "kube-controller-manager-ha-766300" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:15.450555   12940 pod_ready.go:81] duration metric: took 385.7627ms for pod "kube-controller-manager-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:15.450555   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:15.647228   12940 request.go:629] Waited for 196.4263ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-766300-m02
	I0807 18:39:15.647392   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-766300-m02
	I0807 18:39:15.647392   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:15.647392   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:15.647392   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:15.652244   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:39:15.851763   12940 request.go:629] Waited for 199.2773ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:15.851911   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:15.851911   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:15.851911   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:15.851967   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:15.856337   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:15.857900   12940 pod_ready.go:92] pod "kube-controller-manager-ha-766300-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:15.857968   12940 pod_ready.go:81] duration metric: took 407.3359ms for pod "kube-controller-manager-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:15.857968   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8v6vm" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:16.040099   12940 request.go:629] Waited for 181.8172ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8v6vm
	I0807 18:39:16.040204   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8v6vm
	I0807 18:39:16.040204   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:16.040289   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:16.040289   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:16.046199   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:16.242532   12940 request.go:629] Waited for 194.9787ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:16.242873   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:16.242873   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:16.242908   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:16.242908   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:16.247518   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:16.249056   12940 pod_ready.go:92] pod "kube-proxy-8v6vm" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:16.249056   12940 pod_ready.go:81] duration metric: took 391.083ms for pod "kube-proxy-8v6vm" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:16.249165   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d6ckx" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:16.444988   12940 request.go:629] Waited for 195.4584ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d6ckx
	I0807 18:39:16.445208   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d6ckx
	I0807 18:39:16.445208   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:16.445285   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:16.445285   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:16.453483   12940 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:39:16.649838   12940 request.go:629] Waited for 195.4687ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:39:16.650232   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:39:16.650232   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:16.650232   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:16.650232   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:16.655334   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:16.656316   12940 pod_ready.go:92] pod "kube-proxy-d6ckx" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:16.656316   12940 pod_ready.go:81] duration metric: took 407.1453ms for pod "kube-proxy-d6ckx" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:16.656316   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:16.854043   12940 request.go:629] Waited for 197.5552ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-766300
	I0807 18:39:16.854210   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-766300
	I0807 18:39:16.854210   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:16.854210   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:16.854351   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:16.858752   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:17.042139   12940 request.go:629] Waited for 182.0486ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:39:17.042502   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:39:17.042665   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:17.042665   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:17.042665   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:17.048101   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:17.049567   12940 pod_ready.go:92] pod "kube-scheduler-ha-766300" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:17.049567   12940 pod_ready.go:81] duration metric: took 393.246ms for pod "kube-scheduler-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:17.049636   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:17.246897   12940 request.go:629] Waited for 196.9126ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-766300-m02
	I0807 18:39:17.247148   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-766300-m02
	I0807 18:39:17.247185   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:17.247208   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:17.247230   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:17.251983   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:17.450137   12940 request.go:629] Waited for 196.2691ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:17.450254   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:17.450254   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:17.450254   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:17.450254   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:17.454709   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:17.456365   12940 pod_ready.go:92] pod "kube-scheduler-ha-766300-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:17.456365   12940 pod_ready.go:81] duration metric: took 406.7239ms for pod "kube-scheduler-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:17.456365   12940 pod_ready.go:38] duration metric: took 3.2110527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 18:39:17.456569   12940 api_server.go:52] waiting for apiserver process to appear ...
	I0807 18:39:17.469128   12940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:39:17.498364   12940 api_server.go:72] duration metric: took 26.2350393s to wait for apiserver process to appear ...
	I0807 18:39:17.498499   12940 api_server.go:88] waiting for apiserver healthz status ...
	I0807 18:39:17.498568   12940 api_server.go:253] Checking apiserver healthz at https://172.28.224.88:8443/healthz ...
	I0807 18:39:17.508002   12940 api_server.go:279] https://172.28.224.88:8443/healthz returned 200:
	ok
	I0807 18:39:17.508997   12940 round_trippers.go:463] GET https://172.28.224.88:8443/version
	I0807 18:39:17.508997   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:17.508997   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:17.509092   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:17.510242   12940 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0807 18:39:17.511037   12940 api_server.go:141] control plane version: v1.30.3
	I0807 18:39:17.511119   12940 api_server.go:131] duration metric: took 12.5964ms to wait for apiserver health ...
	I0807 18:39:17.511119   12940 system_pods.go:43] waiting for kube-system pods to appear ...
	I0807 18:39:17.640308   12940 request.go:629] Waited for 128.8747ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods
	I0807 18:39:17.640308   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods
	I0807 18:39:17.640543   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:17.640543   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:17.640543   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:17.648305   12940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:39:17.656376   12940 system_pods.go:59] 17 kube-system pods found
	I0807 18:39:17.656376   12940 system_pods.go:61] "coredns-7db6d8ff4d-9tjv6" [54967df0-ac2c-4024-8947-b4e972a4b59a] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "coredns-7db6d8ff4d-fqjwg" [cc54cc3e-f40c-43c2-ac25-25bd315c3dd9] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "etcd-ha-766300" [5c619c4a-4fd5-494f-bb7b-80754258d40a] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "etcd-ha-766300-m02" [97b2b2f2-ea73-4de0-86aa-4854386b8f71] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "kindnet-gh6wt" [35666307-476d-460d-af1d-23d3bae8aec2] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "kindnet-scfzz" [ad036ebf-9679-47a6-b8e0-f433a34f55cb] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "kube-apiserver-ha-766300" [d1f122ef-d89f-4a4f-8194-86e5e84faea4] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "kube-apiserver-ha-766300-m02" [249c438f-592d-47ba-bf0b-252bde32a27d] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "kube-controller-manager-ha-766300" [648bbb2b-06b4-487b-a9fa-c530a7ed5d11] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "kube-controller-manager-ha-766300-m02" [c8ab36c4-89ca-4519-8eaa-c27c00b78095] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "kube-proxy-8v6vm" [c6fa744a-fc9b-4da6-933a-866565e8318c] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "kube-proxy-d6ckx" [257858b0-6bb6-4bfb-9b5c-591fdb24929e] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "kube-scheduler-ha-766300" [1d44914f-67d1-4b8f-934c-273d21dc7d60] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "kube-scheduler-ha-766300-m02" [22b9a1c1-e369-4270-90f6-f3caa10e0705] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "kube-vip-ha-766300" [e2b31b5c-6e03-4e58-8cb4-10fc6869812b] Running
	I0807 18:39:17.656953   12940 system_pods.go:61] "kube-vip-ha-766300-m02" [0034d823-e21f-4be0-bbdb-09db13937fb7] Running
	I0807 18:39:17.657063   12940 system_pods.go:61] "storage-provisioner" [9a8a8ca1-bdd6-4ca8-a2d4-de3839223c9c] Running
	I0807 18:39:17.657082   12940 system_pods.go:74] duration metric: took 145.9424ms to wait for pod list to return data ...
	I0807 18:39:17.657116   12940 default_sa.go:34] waiting for default service account to be created ...
	I0807 18:39:17.844035   12940 request.go:629] Waited for 186.9162ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/default/serviceaccounts
	I0807 18:39:17.844035   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/default/serviceaccounts
	I0807 18:39:17.844035   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:17.844035   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:17.844035   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:17.850013   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:17.850701   12940 default_sa.go:45] found service account: "default"
	I0807 18:39:17.850760   12940 default_sa.go:55] duration metric: took 193.6416ms for default service account to be created ...
	I0807 18:39:17.850760   12940 system_pods.go:116] waiting for k8s-apps to be running ...
	I0807 18:39:18.046880   12940 request.go:629] Waited for 195.5777ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods
	I0807 18:39:18.046880   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods
	I0807 18:39:18.047035   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:18.047035   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:18.047066   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:18.058532   12940 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0807 18:39:18.065181   12940 system_pods.go:86] 17 kube-system pods found
	I0807 18:39:18.065181   12940 system_pods.go:89] "coredns-7db6d8ff4d-9tjv6" [54967df0-ac2c-4024-8947-b4e972a4b59a] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "coredns-7db6d8ff4d-fqjwg" [cc54cc3e-f40c-43c2-ac25-25bd315c3dd9] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "etcd-ha-766300" [5c619c4a-4fd5-494f-bb7b-80754258d40a] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "etcd-ha-766300-m02" [97b2b2f2-ea73-4de0-86aa-4854386b8f71] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kindnet-gh6wt" [35666307-476d-460d-af1d-23d3bae8aec2] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kindnet-scfzz" [ad036ebf-9679-47a6-b8e0-f433a34f55cb] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kube-apiserver-ha-766300" [d1f122ef-d89f-4a4f-8194-86e5e84faea4] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kube-apiserver-ha-766300-m02" [249c438f-592d-47ba-bf0b-252bde32a27d] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kube-controller-manager-ha-766300" [648bbb2b-06b4-487b-a9fa-c530a7ed5d11] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kube-controller-manager-ha-766300-m02" [c8ab36c4-89ca-4519-8eaa-c27c00b78095] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kube-proxy-8v6vm" [c6fa744a-fc9b-4da6-933a-866565e8318c] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kube-proxy-d6ckx" [257858b0-6bb6-4bfb-9b5c-591fdb24929e] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kube-scheduler-ha-766300" [1d44914f-67d1-4b8f-934c-273d21dc7d60] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kube-scheduler-ha-766300-m02" [22b9a1c1-e369-4270-90f6-f3caa10e0705] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kube-vip-ha-766300" [e2b31b5c-6e03-4e58-8cb4-10fc6869812b] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kube-vip-ha-766300-m02" [0034d823-e21f-4be0-bbdb-09db13937fb7] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "storage-provisioner" [9a8a8ca1-bdd6-4ca8-a2d4-de3839223c9c] Running
	I0807 18:39:18.065181   12940 system_pods.go:126] duration metric: took 214.4184ms to wait for k8s-apps to be running ...
	I0807 18:39:18.065181   12940 system_svc.go:44] waiting for kubelet service to be running ....
	I0807 18:39:18.079966   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:39:18.105345   12940 system_svc.go:56] duration metric: took 40.1628ms WaitForService to wait for kubelet
	I0807 18:39:18.106440   12940 kubeadm.go:582] duration metric: took 26.8430568s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 18:39:18.106440   12940 node_conditions.go:102] verifying NodePressure condition ...
	I0807 18:39:18.248371   12940 request.go:629] Waited for 141.637ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes
	I0807 18:39:18.248371   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes
	I0807 18:39:18.248371   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:18.248596   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:18.248596   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:18.257609   12940 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0807 18:39:18.258646   12940 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 18:39:18.258646   12940 node_conditions.go:123] node cpu capacity is 2
	I0807 18:39:18.258646   12940 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 18:39:18.258646   12940 node_conditions.go:123] node cpu capacity is 2
	I0807 18:39:18.258646   12940 node_conditions.go:105] duration metric: took 152.1572ms to run NodePressure ...
	I0807 18:39:18.258646   12940 start.go:241] waiting for startup goroutines ...
	I0807 18:39:18.258646   12940 start.go:255] writing updated cluster config ...
	I0807 18:39:18.263168   12940 out.go:177] 
	I0807 18:39:18.277612   12940 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:39:18.277612   12940 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\config.json ...
	I0807 18:39:18.284480   12940 out.go:177] * Starting "ha-766300-m03" control-plane node in "ha-766300" cluster
	I0807 18:39:18.287016   12940 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 18:39:18.287080   12940 cache.go:56] Caching tarball of preloaded images
	I0807 18:39:18.287327   12940 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0807 18:39:18.287327   12940 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 18:39:18.287933   12940 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\config.json ...
	I0807 18:39:18.293848   12940 start.go:360] acquireMachinesLock for ha-766300-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 18:39:18.294016   12940 start.go:364] duration metric: took 135.4µs to acquireMachinesLock for "ha-766300-m03"
	I0807 18:39:18.294073   12940 start.go:93] Provisioning new machine with config: &{Name:ha-766300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-766300 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.224.88 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.238.183 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 18:39:18.294073   12940 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0807 18:39:18.297706   12940 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 18:39:18.297890   12940 start.go:159] libmachine.API.Create for "ha-766300" (driver="hyperv")
	I0807 18:39:18.297890   12940 client.go:168] LocalClient.Create starting
	I0807 18:39:18.297890   12940 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0807 18:39:18.298620   12940 main.go:141] libmachine: Decoding PEM data...
	I0807 18:39:18.298620   12940 main.go:141] libmachine: Parsing certificate...
	I0807 18:39:18.298620   12940 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0807 18:39:18.298620   12940 main.go:141] libmachine: Decoding PEM data...
	I0807 18:39:18.298620   12940 main.go:141] libmachine: Parsing certificate...
	I0807 18:39:18.299265   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0807 18:39:20.242074   12940 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0807 18:39:20.242074   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:20.243126   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0807 18:39:22.026163   12940 main.go:141] libmachine: [stdout =====>] : False
	
	I0807 18:39:22.026163   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:22.026163   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0807 18:39:23.553099   12940 main.go:141] libmachine: [stdout =====>] : True
	
	I0807 18:39:23.553612   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:23.553612   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0807 18:39:27.365238   12940 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0807 18:39:27.365628   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:27.368044   12940 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0807 18:39:27.812541   12940 main.go:141] libmachine: Creating SSH key...
	I0807 18:39:27.960062   12940 main.go:141] libmachine: Creating VM...
	I0807 18:39:27.960062   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0807 18:39:31.015626   12940 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0807 18:39:31.015626   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:31.015626   12940 main.go:141] libmachine: Using switch "Default Switch"
	I0807 18:39:31.015778   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0807 18:39:32.860744   12940 main.go:141] libmachine: [stdout =====>] : True
	
	I0807 18:39:32.860744   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:32.860877   12940 main.go:141] libmachine: Creating VHD
	I0807 18:39:32.860877   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0807 18:39:36.732406   12940 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 758DB308-813F-4953-BDDD-8289B54F244C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0807 18:39:36.732512   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:36.732512   12940 main.go:141] libmachine: Writing magic tar header
	I0807 18:39:36.732612   12940 main.go:141] libmachine: Writing SSH key tar header
	I0807 18:39:36.743491   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0807 18:39:40.027193   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:39:40.028165   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:40.028165   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03\disk.vhd' -SizeBytes 20000MB
	I0807 18:39:42.633400   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:39:42.634114   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:42.634218   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-766300-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0807 18:39:46.400711   12940 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-766300-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0807 18:39:46.400711   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:46.401465   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-766300-m03 -DynamicMemoryEnabled $false
	I0807 18:39:48.729336   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:39:48.729336   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:48.730024   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-766300-m03 -Count 2
	I0807 18:39:50.976387   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:39:50.976479   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:50.976479   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-766300-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03\boot2docker.iso'
	I0807 18:39:53.686717   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:39:53.687223   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:53.687223   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-766300-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03\disk.vhd'
	I0807 18:39:56.427762   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:39:56.427762   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:56.427762   12940 main.go:141] libmachine: Starting VM...
	I0807 18:39:56.427762   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-766300-m03
	I0807 18:39:59.677372   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:39:59.677372   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:59.677372   12940 main.go:141] libmachine: Waiting for host to start...
	I0807 18:39:59.677372   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:02.113855   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:02.114549   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:02.114608   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:40:04.729194   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:40:04.729194   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:05.740197   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:08.098936   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:08.099334   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:08.099334   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:40:10.745011   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:40:10.745011   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:11.755073   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:14.044294   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:14.044492   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:14.044492   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:40:16.645767   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:40:16.645767   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:17.654128   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:19.981889   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:19.981889   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:19.981889   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:40:22.687744   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:40:22.688318   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:23.689224   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:26.061254   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:26.061459   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:26.061632   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:40:28.734016   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:40:28.734016   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:28.735101   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:30.981416   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:30.981709   12940 main.go:141] libmachine: [stderr =====>] : 
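The sequence above repeats two PowerShell probes, `(Hyper-V\Get-VM ...).state` and `((Hyper-V\Get-VM ...).networkadapters[0]).ipaddresses[0]`, until the adapter finally reports 172.28.233.130. A simplified sketch of that wait loop, where the two callables stand in for the PowerShell invocations (the helper itself is hypothetical, not libmachine code):

```python
import time

def wait_for_ip(get_state, get_ip, interval_s=1.0, max_attempts=60):
    """Poll VM state, then its first adapter IP, until an address appears."""
    for _ in range(max_attempts):
        if get_state() == "Running":
            ip = get_ip()
            if ip:  # empty output means the adapter has no address yet
                return ip
        time.sleep(interval_s)
    raise TimeoutError("VM never reported an IP address")

# Simulate an adapter that stays empty for a few polls, then reports an address,
# as happens between 18:40:04 and 18:40:28 in the log above.
answers = iter(["", "", "", "", "172.28.233.130"])
ip = wait_for_ip(lambda: "Running", lambda: next(answers), interval_s=0)
```

The state check comes first because a stopped VM makes the adapter query meaningless; the log shows the same ordering on every iteration.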
	I0807 18:40:30.981709   12940 machine.go:94] provisionDockerMachine start ...
	I0807 18:40:30.981709   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:33.265775   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:33.265775   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:33.266390   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:40:35.917231   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:40:35.917231   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:35.923650   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:40:35.924199   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.233.130 22 <nil> <nil>}
	I0807 18:40:35.924199   12940 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 18:40:36.047542   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0807 18:40:36.047542   12940 buildroot.go:166] provisioning hostname "ha-766300-m03"
	I0807 18:40:36.047542   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:38.283780   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:38.284504   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:38.284504   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:40:40.979089   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:40:40.979339   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:40.985107   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:40:40.985649   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.233.130 22 <nil> <nil>}
	I0807 18:40:40.985649   12940 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-766300-m03 && echo "ha-766300-m03" | sudo tee /etc/hostname
	I0807 18:40:41.146157   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-766300-m03
	
	I0807 18:40:41.146264   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:43.404385   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:43.404764   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:43.404837   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:40:46.071087   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:40:46.071087   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:46.076990   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:40:46.077372   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.233.130 22 <nil> <nil>}
	I0807 18:40:46.077912   12940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-766300-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-766300-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-766300-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 18:40:46.221352   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
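The shell snippet above is an idempotent /etc/hosts update: if no line already maps the hostname, it either rewrites an existing `127.0.1.1` entry in place or appends a new one. The same logic, sketched in Python as an approximation of the grep/sed commands (not minikube's code):

```python
def ensure_hosts_entry(lines, hostname):
    """Idempotently map 127.0.1.1 to hostname, approximating the grep/sed above."""
    # grep -xq '.*\s<hostname>': any line already ending in the hostname wins.
    if any(line.rstrip().endswith(" " + hostname) for line in lines):
        return list(lines)  # hostname already mapped; nothing to do
    out, replaced = [], False
    for line in lines:
        if line.startswith("127.0.1.1"):
            out.append(f"127.0.1.1 {hostname}\n")  # sed: rewrite existing entry
            replaced = True
        else:
            out.append(line)
    if not replaced:
        out.append(f"127.0.1.1 {hostname}\n")  # tee -a: append when absent
    return out

hosts = ["127.0.0.1 localhost\n"]
updated = ensure_hosts_entry(hosts, "ha-766300-m03")
```

Running the update a second time leaves the file untouched, which is why the SSH command above can safely run on every provision.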
	I0807 18:40:46.221352   12940 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0807 18:40:46.221426   12940 buildroot.go:174] setting up certificates
	I0807 18:40:46.221426   12940 provision.go:84] configureAuth start
	I0807 18:40:46.221552   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:48.450906   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:48.450906   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:48.451449   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:40:51.126252   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:40:51.126252   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:51.127193   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:53.346659   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:53.346659   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:53.346659   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:40:56.030723   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:40:56.030723   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:56.030723   12940 provision.go:143] copyHostCerts
	I0807 18:40:56.031878   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0807 18:40:56.032236   12940 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0807 18:40:56.032336   12940 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0807 18:40:56.032697   12940 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0807 18:40:56.033587   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0807 18:40:56.033587   12940 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0807 18:40:56.033587   12940 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0807 18:40:56.034462   12940 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0807 18:40:56.035619   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0807 18:40:56.035807   12940 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0807 18:40:56.035807   12940 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0807 18:40:56.035807   12940 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
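The copyHostCerts steps above follow a remove-then-copy pattern for each of ca.pem, cert.pem, and key.pem: the destination is deleted if found, then refreshed from the certs directory. A hedged sketch of that pattern using temporary files (an illustrative helper, not minikube's exec_runner):

```python
import os
import shutil
import tempfile

def refresh_copy(src, dst):
    """Remove-then-copy refresh, as in the found/rm/cp sequence logged above."""
    if os.path.exists(dst):
        os.remove(dst)  # mirrors "found ..., removing ..."
    shutil.copyfile(src, dst)  # mirrors "cp: ... --> ..."

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "ca.pem")
dst = os.path.join(workdir, "store-ca.pem")
with open(src, "w") as f:
    f.write("-----BEGIN CERTIFICATE-----\n")
refresh_copy(src, dst)  # first copy: dst did not exist yet
refresh_copy(src, dst)  # second copy: dst is removed and rewritten
```

Deleting before copying avoids stale certificates lingering in the store when a source file shrinks or its permissions change.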
	I0807 18:40:56.037361   12940 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-766300-m03 san=[127.0.0.1 172.28.233.130 ha-766300-m03 localhost minikube]
	I0807 18:40:56.304335   12940 provision.go:177] copyRemoteCerts
	I0807 18:40:56.317330   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 18:40:56.317330   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:58.590497   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:58.590497   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:58.590966   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:41:01.258076   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:41:01.258076   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:01.258994   12940 sshutil.go:53] new ssh client: &{IP:172.28.233.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03\id_rsa Username:docker}
	I0807 18:41:01.368132   12940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0505801s)
	I0807 18:41:01.368132   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0807 18:41:01.368751   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 18:41:01.416213   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0807 18:41:01.416514   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0807 18:41:01.464664   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0807 18:41:01.465643   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0807 18:41:01.514396   12940 provision.go:87] duration metric: took 15.2927756s to configureAuth
	I0807 18:41:01.514396   12940 buildroot.go:189] setting minikube options for container-runtime
	I0807 18:41:01.515102   12940 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:41:01.515238   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:41:03.726417   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:03.727058   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:03.727411   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:41:06.384761   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:41:06.384761   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:06.391660   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:41:06.392205   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.233.130 22 <nil> <nil>}
	I0807 18:41:06.392205   12940 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0807 18:41:06.511802   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0807 18:41:06.511878   12940 buildroot.go:70] root file system type: tmpfs
	I0807 18:41:06.512223   12940 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0807 18:41:06.512282   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:41:08.750415   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:08.750415   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:08.751096   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:41:11.408510   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:41:11.408510   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:11.414515   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:41:11.415201   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.233.130 22 <nil> <nil>}
	I0807 18:41:11.415201   12940 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.224.88"
	Environment="NO_PROXY=172.28.224.88,172.28.238.183"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0807 18:41:11.560232   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.224.88
	Environment=NO_PROXY=172.28.224.88,172.28.238.183
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0807 18:41:11.560386   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:41:13.816422   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:13.816990   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:13.817061   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:41:16.514507   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:41:16.514507   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:16.521263   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:41:16.521844   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.233.130 22 <nil> <nil>}
	I0807 18:41:16.521883   12940 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0807 18:41:18.837736   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0807 18:41:18.837736   12940 machine.go:97] duration metric: took 47.8554186s to provisionDockerMachine
	I0807 18:41:18.837736   12940 client.go:171] duration metric: took 2m0.5383074s to LocalClient.Create
	I0807 18:41:18.837736   12940 start.go:167] duration metric: took 2m0.5383074s to libmachine.API.Create "ha-766300"
	I0807 18:41:18.837736   12940 start.go:293] postStartSetup for "ha-766300-m03" (driver="hyperv")
	I0807 18:41:18.837736   12940 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 18:41:18.851705   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 18:41:18.851705   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:41:21.070549   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:21.070593   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:21.070681   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:41:23.712527   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:41:23.712527   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:23.712527   12940 sshutil.go:53] new ssh client: &{IP:172.28.233.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03\id_rsa Username:docker}
	I0807 18:41:23.812678   12940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9606641s)
	I0807 18:41:23.824635   12940 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 18:41:23.831791   12940 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 18:41:23.831866   12940 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0807 18:41:23.832339   12940 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0807 18:41:23.833499   12940 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> 96602.pem in /etc/ssl/certs
	I0807 18:41:23.833667   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> /etc/ssl/certs/96602.pem
	I0807 18:41:23.846071   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 18:41:23.863341   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /etc/ssl/certs/96602.pem (1708 bytes)
	I0807 18:41:23.910437   12940 start.go:296] duration metric: took 5.0726367s for postStartSetup
	I0807 18:41:23.913180   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:41:26.140068   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:26.140275   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:26.140275   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:41:28.779229   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:41:28.779229   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:28.779507   12940 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\config.json ...
	I0807 18:41:28.782142   12940 start.go:128] duration metric: took 2m10.486404s to createHost
	I0807 18:41:28.782142   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:41:30.990595   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:30.990595   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:30.991298   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:41:33.628034   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:41:33.628165   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:33.636700   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:41:33.637633   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.233.130 22 <nil> <nil>}
	I0807 18:41:33.637633   12940 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 18:41:33.758348   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723056093.771913301
	
	I0807 18:41:33.758348   12940 fix.go:216] guest clock: 1723056093.771913301
	I0807 18:41:33.758348   12940 fix.go:229] Guest: 2024-08-07 18:41:33.771913301 +0000 UTC Remote: 2024-08-07 18:41:28.7821423 +0000 UTC m=+597.777960501 (delta=4.989771001s)
	I0807 18:41:33.758348   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:41:35.964326   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:35.964326   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:35.964855   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:41:38.598224   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:41:38.598815   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:38.604663   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:41:38.604824   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.233.130 22 <nil> <nil>}
	I0807 18:41:38.604824   12940 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1723056093
	I0807 18:41:38.738848   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Aug  7 18:41:33 UTC 2024
	
	I0807 18:41:38.738888   12940 fix.go:236] clock set: Wed Aug  7 18:41:33 UTC 2024
	 (err=<nil>)
	I0807 18:41:38.738888   12940 start.go:83] releasing machines lock for "ha-766300-m03", held for 2m20.4430236s
	I0807 18:41:38.738960   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:41:40.962560   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:40.963256   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:40.963256   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:41:43.565131   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:41:43.565364   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:43.569547   12940 out.go:177] * Found network options:
	I0807 18:41:43.572411   12940 out.go:177]   - NO_PROXY=172.28.224.88,172.28.238.183
	W0807 18:41:43.574907   12940 proxy.go:119] fail to check proxy env: Error ip not in block
	W0807 18:41:43.574907   12940 proxy.go:119] fail to check proxy env: Error ip not in block
	I0807 18:41:43.577946   12940 out.go:177]   - NO_PROXY=172.28.224.88,172.28.238.183
	W0807 18:41:43.582494   12940 proxy.go:119] fail to check proxy env: Error ip not in block
	W0807 18:41:43.582494   12940 proxy.go:119] fail to check proxy env: Error ip not in block
	W0807 18:41:43.583517   12940 proxy.go:119] fail to check proxy env: Error ip not in block
	W0807 18:41:43.583517   12940 proxy.go:119] fail to check proxy env: Error ip not in block
	I0807 18:41:43.586175   12940 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0807 18:41:43.586175   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:41:43.596180   12940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0807 18:41:43.596180   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:41:45.909854   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:45.909854   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:45.910452   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:41:45.931221   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:45.931221   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:45.932096   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:41:48.749727   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:41:48.749727   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:48.750540   12940 sshutil.go:53] new ssh client: &{IP:172.28.233.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03\id_rsa Username:docker}
	I0807 18:41:48.772477   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:41:48.772477   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:48.772477   12940 sshutil.go:53] new ssh client: &{IP:172.28.233.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03\id_rsa Username:docker}
	I0807 18:41:48.846070   12940 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.2598278s)
	W0807 18:41:48.846183   12940 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0807 18:41:48.866316   12940 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2700695s)
	W0807 18:41:48.866316   12940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 18:41:48.878215   12940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 18:41:48.908393   12940 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0807 18:41:48.908470   12940 start.go:495] detecting cgroup driver to use...
	I0807 18:41:48.908702   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 18:41:48.959132   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	W0807 18:41:48.960131   12940 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0807 18:41:48.960131   12940 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0807 18:41:48.991939   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0807 18:41:49.011917   12940 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0807 18:41:49.022903   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0807 18:41:49.056681   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 18:41:49.092850   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0807 18:41:49.126506   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 18:41:49.161485   12940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 18:41:49.198163   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0807 18:41:49.229702   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0807 18:41:49.260690   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0807 18:41:49.290775   12940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 18:41:49.320887   12940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 18:41:49.349897   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:41:49.553530   12940 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0807 18:41:49.588612   12940 start.go:495] detecting cgroup driver to use...
	I0807 18:41:49.601299   12940 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0807 18:41:49.637151   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 18:41:49.667148   12940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 18:41:49.714774   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 18:41:49.752437   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 18:41:49.788357   12940 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0807 18:41:49.851923   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 18:41:49.879736   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 18:41:49.927930   12940 ssh_runner.go:195] Run: which cri-dockerd
	I0807 18:41:49.946230   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0807 18:41:49.965049   12940 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0807 18:41:50.009091   12940 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0807 18:41:50.219111   12940 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0807 18:41:50.424239   12940 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0807 18:41:50.424320   12940 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0807 18:41:50.469832   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:41:50.667299   12940 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 18:41:53.271811   12940 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6044788s)
	I0807 18:41:53.284526   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0807 18:41:53.323554   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 18:41:53.357557   12940 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0807 18:41:53.569550   12940 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0807 18:41:53.780533   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:41:53.976111   12940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0807 18:41:54.020760   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 18:41:54.063117   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:41:54.279906   12940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0807 18:41:54.397317   12940 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0807 18:41:54.409022   12940 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0807 18:41:54.418510   12940 start.go:563] Will wait 60s for crictl version
	I0807 18:41:54.431124   12940 ssh_runner.go:195] Run: which crictl
	I0807 18:41:54.448098   12940 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 18:41:54.500125   12940 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0807 18:41:54.509857   12940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 18:41:54.552564   12940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 18:41:54.586599   12940 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0807 18:41:54.589564   12940 out.go:177]   - env NO_PROXY=172.28.224.88
	I0807 18:41:54.592573   12940 out.go:177]   - env NO_PROXY=172.28.224.88,172.28.238.183
	I0807 18:41:54.594565   12940 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0807 18:41:54.598563   12940 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0807 18:41:54.598563   12940 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0807 18:41:54.598563   12940 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0807 18:41:54.598563   12940 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:f6:3a:6a Flags:up|broadcast|multicast|running}
	I0807 18:41:54.601606   12940 ip.go:210] interface addr: fe80::e7eb:b592:d388:ff99/64
	I0807 18:41:54.601606   12940 ip.go:210] interface addr: 172.28.224.1/20
	I0807 18:41:54.613595   12940 ssh_runner.go:195] Run: grep 172.28.224.1	host.minikube.internal$ /etc/hosts
	I0807 18:41:54.619564   12940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 18:41:54.648751   12940 mustload.go:65] Loading cluster: ha-766300
	I0807 18:41:54.649779   12940 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:41:54.650760   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:41:56.877675   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:56.878425   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:56.878425   12940 host.go:66] Checking if "ha-766300" exists ...
	I0807 18:41:56.879052   12940 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300 for IP: 172.28.233.130
	I0807 18:41:56.879111   12940 certs.go:194] generating shared ca certs ...
	I0807 18:41:56.879111   12940 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:41:56.879643   12940 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0807 18:41:56.879839   12940 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0807 18:41:56.879839   12940 certs.go:256] generating profile certs ...
	I0807 18:41:56.881110   12940 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\client.key
	I0807 18:41:56.881352   12940 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.89c951d6
	I0807 18:41:56.881503   12940 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.89c951d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.224.88 172.28.238.183 172.28.233.130 172.28.239.254]
	I0807 18:41:57.100497   12940 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.89c951d6 ...
	I0807 18:41:57.100497   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.89c951d6: {Name:mk78c55a8688360f78348ea745a48b0e73bc659e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:41:57.102066   12940 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.89c951d6 ...
	I0807 18:41:57.102066   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.89c951d6: {Name:mk8999dda82f8a430006c9bcf70b2406d4ab194a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:41:57.102613   12940 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.89c951d6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt
	I0807 18:41:57.117018   12940 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.89c951d6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key
	I0807 18:41:57.119448   12940 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.key
	I0807 18:41:57.119568   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0807 18:41:57.119740   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0807 18:41:57.119926   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0807 18:41:57.120193   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0807 18:41:57.120193   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0807 18:41:57.120193   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0807 18:41:57.120193   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0807 18:41:57.120846   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0807 18:41:57.121424   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem (1338 bytes)
	W0807 18:41:57.121615   12940 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660_empty.pem, impossibly tiny 0 bytes
	I0807 18:41:57.121838   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0807 18:41:57.122147   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0807 18:41:57.122147   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0807 18:41:57.122899   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0807 18:41:57.123223   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem (1708 bytes)
	I0807 18:41:57.123223   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> /usr/share/ca-certificates/96602.pem
	I0807 18:41:57.123757   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:41:57.123937   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem -> /usr/share/ca-certificates/9660.pem
	I0807 18:41:57.124106   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:41:59.379705   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:59.380670   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:59.380714   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:42:02.111754   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:42:02.111754   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:42:02.112818   12940 sshutil.go:53] new ssh client: &{IP:172.28.224.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\id_rsa Username:docker}
	I0807 18:42:02.215971   12940 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0807 18:42:02.223416   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0807 18:42:02.257708   12940 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0807 18:42:02.267969   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0807 18:42:02.303615   12940 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0807 18:42:02.311065   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0807 18:42:02.344282   12940 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0807 18:42:02.351203   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0807 18:42:02.385025   12940 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0807 18:42:02.392411   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0807 18:42:02.426401   12940 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0807 18:42:02.433489   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0807 18:42:02.456579   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 18:42:02.507958   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 18:42:02.557479   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 18:42:02.607728   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0807 18:42:02.655739   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0807 18:42:02.703156   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0807 18:42:02.750995   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 18:42:02.799682   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0807 18:42:02.849651   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /usr/share/ca-certificates/96602.pem (1708 bytes)
	I0807 18:42:02.899640   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 18:42:02.952149   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem --> /usr/share/ca-certificates/9660.pem (1338 bytes)
	I0807 18:42:03.000639   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0807 18:42:03.034048   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0807 18:42:03.067576   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0807 18:42:03.101843   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0807 18:42:03.136591   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0807 18:42:03.169419   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0807 18:42:03.202201   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0807 18:42:03.255497   12940 ssh_runner.go:195] Run: openssl version
	I0807 18:42:03.276228   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9660.pem && ln -fs /usr/share/ca-certificates/9660.pem /etc/ssl/certs/9660.pem"
	I0807 18:42:03.309250   12940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9660.pem
	I0807 18:42:03.316562   12940 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 17:50 /usr/share/ca-certificates/9660.pem
	I0807 18:42:03.328422   12940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9660.pem
	I0807 18:42:03.350679   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9660.pem /etc/ssl/certs/51391683.0"
	I0807 18:42:03.383321   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96602.pem && ln -fs /usr/share/ca-certificates/96602.pem /etc/ssl/certs/96602.pem"
	I0807 18:42:03.415323   12940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96602.pem
	I0807 18:42:03.422342   12940 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 17:50 /usr/share/ca-certificates/96602.pem
	I0807 18:42:03.434519   12940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96602.pem
	I0807 18:42:03.457370   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96602.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 18:42:03.491621   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 18:42:03.537572   12940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:42:03.545217   12940 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:33 /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:42:03.558124   12940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:42:03.579280   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 18:42:03.609316   12940 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 18:42:03.617695   12940 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0807 18:42:03.617987   12940 kubeadm.go:934] updating node {m03 172.28.233.130 8443 v1.30.3 docker true true} ...
	I0807 18:42:03.618150   12940 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-766300-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.233.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-766300 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 18:42:03.618235   12940 kube-vip.go:115] generating kube-vip config ...
	I0807 18:42:03.631186   12940 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0807 18:42:03.659222   12940 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0807 18:42:03.659963   12940 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.239.254
	    - name: prometheus_server
	      value: :2112
    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0807 18:42:03.672174   12940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 18:42:03.688196   12940 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0807 18:42:03.700204   12940 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0807 18:42:03.719724   12940 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0807 18:42:03.719724   12940 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0807 18:42:03.719724   12940 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0807 18:42:03.720486   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0807 18:42:03.720486   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0807 18:42:03.735682   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:42:03.736696   12940 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0807 18:42:03.737691   12940 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0807 18:42:03.759833   12940 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0807 18:42:03.759833   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0807 18:42:03.759833   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0807 18:42:03.759833   12940 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0807 18:42:03.759833   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0807 18:42:03.776560   12940 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0807 18:42:03.837210   12940 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0807 18:42:03.837210   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
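	The binaries above are fetched via `?checksum=file:` URLs, meaning each download is verified against its published `.sha256` before being installed. A local sketch of that verification step, using a stand-in demo file instead of the real kubelet binary:

```shell
# Hash a stand-in file and verify it the way sha256sum --check expects;
# minikube's downloader performs the same comparison against the
# published kubelet.sha256 before installing the binary.
set -e
printf 'kubelet-bytes' > /tmp/kubelet.demo
sha256sum /tmp/kubelet.demo > /tmp/kubelet.demo.sha256
sha256sum --check /tmp/kubelet.demo.sha256
```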
	I0807 18:42:05.155242   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0807 18:42:05.173243   12940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0807 18:42:05.210248   12940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 18:42:05.247886   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0807 18:42:05.293274   12940 ssh_runner.go:195] Run: grep 172.28.239.254	control-plane.minikube.internal$ /etc/hosts
	I0807 18:42:05.304277   12940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.239.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
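	The /etc/hosts rewrite above is idempotent: it strips any existing `control-plane.minikube.internal` entry, then appends the current VIP. The same grep-and-append shape on a scratch copy (`/tmp/hosts.demo` is a hypothetical stand-in for /etc/hosts):

```shell
# Idempotent host-entry rewrite from the log, applied to a scratch copy:
# drop any existing control-plane.minikube.internal line, append the VIP.
hosts=/tmp/hosts.demo
tab=$(printf '\t')
printf '127.0.0.1\tlocalhost\n10.0.0.1\tcontrol-plane.minikube.internal\n' > "$hosts"
{ grep -v "${tab}control-plane.minikube.internal\$" "$hosts"; \
  printf '172.28.239.254\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep -c 'control-plane.minikube.internal' "$hosts"   # exactly one entry remains
```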
	I0807 18:42:05.341471   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:42:05.548457   12940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 18:42:05.579435   12940 host.go:66] Checking if "ha-766300" exists ...
	I0807 18:42:05.580898   12940 start.go:317] joinCluster: &{Name:ha-766300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-766300 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.224.88 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.238.183 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.28.233.130 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:42:05.580898   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0807 18:42:05.581465   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:42:07.840234   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:42:07.841260   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:42:07.841587   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:42:10.567955   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:42:10.567955   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:42:10.568301   12940 sshutil.go:53] new ssh client: &{IP:172.28.224.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\id_rsa Username:docker}
	I0807 18:42:10.791697   12940 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.2106672s)
	I0807 18:42:10.791764   12940 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.28.233.130 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 18:42:10.791879   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zvef20.grw9eubfzckouhp2 --discovery-token-ca-cert-hash sha256:54111b4769238fc338d0511ba57c177f732a6d68c0b9cd0aa8cacbbf3c79643b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-766300-m03 --control-plane --apiserver-advertise-address=172.28.233.130 --apiserver-bind-port=8443"
	I0807 18:42:57.063467   12940 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zvef20.grw9eubfzckouhp2 --discovery-token-ca-cert-hash sha256:54111b4769238fc338d0511ba57c177f732a6d68c0b9cd0aa8cacbbf3c79643b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-766300-m03 --control-plane --apiserver-advertise-address=172.28.233.130 --apiserver-bind-port=8443": (46.2710007s)
	I0807 18:42:57.063467   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0807 18:42:58.127621   12940 ssh_runner.go:235] Completed: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet": (1.0641404s)
	I0807 18:42:58.142729   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-766300-m03 minikube.k8s.io/updated_at=2024_08_07T18_42_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e minikube.k8s.io/name=ha-766300 minikube.k8s.io/primary=false
	I0807 18:42:58.349806   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-766300-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0807 18:42:58.520326   12940 start.go:319] duration metric: took 52.9387557s to joinCluster
	I0807 18:42:58.520546   12940 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.28.233.130 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 18:42:58.521596   12940 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:42:58.523624   12940 out.go:177] * Verifying Kubernetes components...
	I0807 18:42:58.539832   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:42:58.930394   12940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 18:42:58.959404   12940 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 18:42:58.960394   12940 kapi.go:59] client config for ha-766300: &rest.Config{Host:"https://172.28.239.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-766300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-766300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1da64c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0807 18:42:58.960394   12940 kubeadm.go:483] Overriding stale ClientConfig host https://172.28.239.254:8443 with https://172.28.224.88:8443
	I0807 18:42:58.961398   12940 node_ready.go:35] waiting up to 6m0s for node "ha-766300-m03" to be "Ready" ...
	I0807 18:42:58.961398   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:42:58.961398   12940 round_trippers.go:469] Request Headers:
	I0807 18:42:58.961398   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:42:58.961398   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:42:58.975390   12940 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0807 18:42:59.471804   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:42:59.471804   12940 round_trippers.go:469] Request Headers:
	I0807 18:42:59.471804   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:42:59.471804   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:42:59.477960   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:42:59.976514   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:42:59.976514   12940 round_trippers.go:469] Request Headers:
	I0807 18:42:59.976514   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:42:59.976598   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:42:59.982033   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:00.469212   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:00.469212   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:00.469212   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:00.469212   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:00.486553   12940 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0807 18:43:00.974114   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:00.974114   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:00.974114   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:00.974114   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:00.979122   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:00.980473   12940 node_ready.go:53] node "ha-766300-m03" has status "Ready":"False"
	I0807 18:43:01.465168   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:01.465482   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:01.465482   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:01.465517   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:01.472239   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:43:01.970651   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:01.970781   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:01.970781   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:01.970781   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:01.975827   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:02.464253   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:02.464324   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:02.464324   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:02.464324   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:02.479941   12940 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0807 18:43:02.967010   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:02.967066   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:02.967066   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:02.967066   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:02.970871   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:43:03.474836   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:03.474836   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:03.474836   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:03.474836   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:03.485825   12940 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0807 18:43:03.488511   12940 node_ready.go:53] node "ha-766300-m03" has status "Ready":"False"
	I0807 18:43:03.964325   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:03.964409   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:03.964469   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:03.964469   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:03.969711   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:04.465208   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:04.465208   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:04.465208   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:04.465208   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:04.468817   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:43:04.968575   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:04.968741   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:04.968741   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:04.968741   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:04.973142   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:05.468065   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:05.468065   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:05.468065   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:05.468065   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:05.473014   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:05.972479   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:05.972705   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:05.972705   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:05.972705   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:05.977324   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:05.979245   12940 node_ready.go:53] node "ha-766300-m03" has status "Ready":"False"
	I0807 18:43:06.473589   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:06.474363   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:06.474972   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:06.474972   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:06.480343   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:06.963922   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:06.964017   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:06.964017   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:06.964017   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:06.969382   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:07.475433   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:07.475433   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:07.475500   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:07.475500   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:07.483947   12940 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:43:07.962585   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:07.962665   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:07.962665   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:07.962665   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:07.966581   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:43:08.464596   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:08.464596   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:08.464596   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:08.464596   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:08.472255   12940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:43:08.473958   12940 node_ready.go:53] node "ha-766300-m03" has status "Ready":"False"
	I0807 18:43:08.967167   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:08.967244   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:08.967244   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:08.967244   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:08.972982   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:09.467015   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:09.467015   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:09.467186   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:09.467186   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:09.474622   12940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:43:09.962030   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:09.962105   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:09.962105   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:09.962105   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:09.967555   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:10.474381   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:10.474458   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:10.474458   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:10.474458   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:10.486878   12940 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0807 18:43:10.487430   12940 node_ready.go:53] node "ha-766300-m03" has status "Ready":"False"
	I0807 18:43:10.971561   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:10.971561   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:10.971561   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:10.971561   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:10.977374   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:11.469440   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:11.469440   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:11.469440   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:11.469440   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:11.476071   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:43:11.970851   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:11.970926   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:11.970926   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:11.970926   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:11.976504   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:12.470890   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:12.470956   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:12.470956   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:12.470956   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:12.477755   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:43:12.973263   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:12.973495   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:12.973495   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:12.973495   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:12.979744   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:43:12.980367   12940 node_ready.go:53] node "ha-766300-m03" has status "Ready":"False"
	I0807 18:43:13.465744   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:13.465744   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:13.465744   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:13.465744   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:13.471908   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:43:13.966672   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:13.966741   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:13.966741   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:13.966741   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:13.971370   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:14.470881   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:14.470881   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:14.470881   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:14.470881   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:14.478623   12940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:43:14.972870   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:14.972870   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:14.972870   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:14.972870   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:14.977518   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:15.473915   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:15.474124   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:15.474124   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:15.474124   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:15.479622   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:15.480010   12940 node_ready.go:53] node "ha-766300-m03" has status "Ready":"False"
	I0807 18:43:15.977679   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:15.977790   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:15.977790   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:15.977790   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:15.984385   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:43:16.475709   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:16.475823   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:16.475823   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:16.475823   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:16.480231   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:16.976677   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:16.976677   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:16.976677   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:16.976677   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:16.981339   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:17.474910   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:17.475038   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:17.475038   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:17.475038   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:17.479865   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:17.480934   12940 node_ready.go:53] node "ha-766300-m03" has status "Ready":"False"
	I0807 18:43:17.977255   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:17.977255   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:17.977255   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:17.977255   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:17.982853   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:18.464009   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:18.464009   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:18.464009   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:18.464009   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:18.469360   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:18.962855   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:18.962855   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:18.962855   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:18.962855   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:18.967456   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:19.463909   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:19.464432   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.464432   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.464432   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.477247   12940 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0807 18:43:19.478250   12940 node_ready.go:49] node "ha-766300-m03" has status "Ready":"True"
	I0807 18:43:19.478250   12940 node_ready.go:38] duration metric: took 20.5165916s for node "ha-766300-m03" to be "Ready" ...
	I0807 18:43:19.478250   12940 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 18:43:19.478250   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods
	I0807 18:43:19.478250   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.478250   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.478250   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.489269   12940 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0807 18:43:19.498259   12940 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9tjv6" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:19.499247   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9tjv6
	I0807 18:43:19.499247   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.499247   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.499247   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.503243   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:43:19.504793   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:43:19.504793   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.504793   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.504793   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.509781   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:19.510689   12940 pod_ready.go:92] pod "coredns-7db6d8ff4d-9tjv6" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:19.510689   12940 pod_ready.go:81] duration metric: took 11.4418ms for pod "coredns-7db6d8ff4d-9tjv6" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:19.510689   12940 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fqjwg" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:19.510689   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqjwg
	I0807 18:43:19.510689   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.510689   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.510689   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.515262   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:19.516718   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:43:19.516718   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.516718   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.516718   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.520310   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:43:19.521331   12940 pod_ready.go:92] pod "coredns-7db6d8ff4d-fqjwg" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:19.521331   12940 pod_ready.go:81] duration metric: took 10.6419ms for pod "coredns-7db6d8ff4d-fqjwg" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:19.521331   12940 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:19.521331   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/etcd-ha-766300
	I0807 18:43:19.521331   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.521331   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.521331   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.541221   12940 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0807 18:43:19.542172   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:43:19.542237   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.542237   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.542237   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.546428   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:19.547398   12940 pod_ready.go:92] pod "etcd-ha-766300" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:19.547456   12940 pod_ready.go:81] duration metric: took 26.1251ms for pod "etcd-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:19.547456   12940 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:19.547522   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/etcd-ha-766300-m02
	I0807 18:43:19.547522   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.547522   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.547522   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.551433   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:43:19.551433   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:43:19.551433   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.551433   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.551433   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.554395   12940 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:43:19.554395   12940 pod_ready.go:92] pod "etcd-ha-766300-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:19.554395   12940 pod_ready.go:81] duration metric: took 6.9386ms for pod "etcd-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:19.554395   12940 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-766300-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:19.670818   12940 request.go:629] Waited for 116.3357ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/etcd-ha-766300-m03
	I0807 18:43:19.670953   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/etcd-ha-766300-m03
	I0807 18:43:19.670953   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.670953   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.670953   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.675593   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:19.878264   12940 request.go:629] Waited for 201.643ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:19.878465   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:19.878465   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.878549   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.878549   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.881872   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:43:19.883521   12940 pod_ready.go:92] pod "etcd-ha-766300-m03" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:19.883521   12940 pod_ready.go:81] duration metric: took 329.1223ms for pod "etcd-ha-766300-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:19.883620   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:20.066588   12940 request.go:629] Waited for 182.5886ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-766300
	I0807 18:43:20.066667   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-766300
	I0807 18:43:20.066749   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:20.066749   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:20.066749   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:20.071053   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:20.269590   12940 request.go:629] Waited for 197.0445ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:43:20.269590   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:43:20.269590   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:20.269590   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:20.269590   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:20.274427   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:20.274984   12940 pod_ready.go:92] pod "kube-apiserver-ha-766300" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:20.275517   12940 pod_ready.go:81] duration metric: took 391.8924ms for pod "kube-apiserver-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:20.275599   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:20.471784   12940 request.go:629] Waited for 196.1829ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-766300-m02
	I0807 18:43:20.472010   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-766300-m02
	I0807 18:43:20.472010   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:20.472010   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:20.472010   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:20.477953   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:20.675312   12940 request.go:629] Waited for 196.3356ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:43:20.675455   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:43:20.675455   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:20.675455   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:20.675455   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:20.685109   12940 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0807 18:43:20.686302   12940 pod_ready.go:92] pod "kube-apiserver-ha-766300-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:20.686302   12940 pod_ready.go:81] duration metric: took 410.6979ms for pod "kube-apiserver-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:20.686302   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-766300-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:20.878746   12940 request.go:629] Waited for 192.3254ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-766300-m03
	I0807 18:43:20.878746   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-766300-m03
	I0807 18:43:20.878746   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:20.878746   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:20.878746   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:20.883612   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:21.069744   12940 request.go:629] Waited for 184.4783ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:21.069744   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:21.069744   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:21.069744   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:21.069744   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:21.076477   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:21.077009   12940 pod_ready.go:92] pod "kube-apiserver-ha-766300-m03" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:21.077009   12940 pod_ready.go:81] duration metric: took 390.7018ms for pod "kube-apiserver-ha-766300-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:21.077009   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:21.273746   12940 request.go:629] Waited for 196.7021ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-766300
	I0807 18:43:21.273804   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-766300
	I0807 18:43:21.273929   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:21.273929   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:21.273929   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:21.279497   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:21.477806   12940 request.go:629] Waited for 196.9576ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:43:21.477999   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:43:21.477999   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:21.478092   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:21.478154   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:21.483071   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:21.484133   12940 pod_ready.go:92] pod "kube-controller-manager-ha-766300" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:21.484191   12940 pod_ready.go:81] duration metric: took 407.1771ms for pod "kube-controller-manager-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:21.484191   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:21.665332   12940 request.go:629] Waited for 180.9277ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-766300-m02
	I0807 18:43:21.665450   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-766300-m02
	I0807 18:43:21.665450   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:21.665450   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:21.665450   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:21.676054   12940 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0807 18:43:21.870230   12940 request.go:629] Waited for 192.1494ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:43:21.870419   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:43:21.870529   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:21.870529   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:21.870529   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:21.876093   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:21.877076   12940 pod_ready.go:92] pod "kube-controller-manager-ha-766300-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:21.877076   12940 pod_ready.go:81] duration metric: took 392.8803ms for pod "kube-controller-manager-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:21.877076   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-766300-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:22.073598   12940 request.go:629] Waited for 196.1302ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-766300-m03
	I0807 18:43:22.073805   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-766300-m03
	I0807 18:43:22.073964   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:22.073964   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:22.073964   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:22.082776   12940 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:43:22.277313   12940 request.go:629] Waited for 193.2701ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:22.277493   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:22.277542   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:22.277542   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:22.277542   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:22.285315   12940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:43:22.286310   12940 pod_ready.go:92] pod "kube-controller-manager-ha-766300-m03" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:22.286344   12940 pod_ready.go:81] duration metric: took 409.2629ms for pod "kube-controller-manager-ha-766300-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:22.286344   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8v6vm" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:22.464754   12940 request.go:629] Waited for 178.3054ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8v6vm
	I0807 18:43:22.465005   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8v6vm
	I0807 18:43:22.465005   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:22.465096   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:22.465096   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:22.469502   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:22.668243   12940 request.go:629] Waited for 196.6128ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:43:22.668447   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:43:22.668447   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:22.668447   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:22.668447   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:22.682478   12940 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0807 18:43:22.683974   12940 pod_ready.go:92] pod "kube-proxy-8v6vm" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:22.683974   12940 pod_ready.go:81] duration metric: took 397.6242ms for pod "kube-proxy-8v6vm" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:22.683974   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d6ckx" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:22.870634   12940 request.go:629] Waited for 186.3767ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d6ckx
	I0807 18:43:22.870918   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d6ckx
	I0807 18:43:22.871009   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:22.871009   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:22.871009   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:22.876123   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:23.074586   12940 request.go:629] Waited for 196.3446ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:43:23.074811   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:43:23.074811   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:23.074894   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:23.074894   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:23.078401   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:43:23.079549   12940 pod_ready.go:92] pod "kube-proxy-d6ckx" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:23.080083   12940 pod_ready.go:81] duration metric: took 396.1045ms for pod "kube-proxy-d6ckx" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:23.080083   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mlf2g" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:23.278710   12940 request.go:629] Waited for 198.4724ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mlf2g
	I0807 18:43:23.278955   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mlf2g
	I0807 18:43:23.279050   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:23.279050   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:23.279050   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:23.286501   12940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:43:23.466811   12940 request.go:629] Waited for 178.9145ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:23.466946   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:23.466946   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:23.466946   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:23.466946   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:23.471530   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:23.472639   12940 pod_ready.go:92] pod "kube-proxy-mlf2g" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:23.472749   12940 pod_ready.go:81] duration metric: took 392.6612ms for pod "kube-proxy-mlf2g" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:23.472749   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:23.669830   12940 request.go:629] Waited for 196.8004ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-766300
	I0807 18:43:23.669830   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-766300
	I0807 18:43:23.669830   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:23.669830   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:23.669830   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:23.674477   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:23.871370   12940 request.go:629] Waited for 194.707ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:43:23.871370   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:43:23.871370   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:23.871706   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:23.871706   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:23.879775   12940 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:43:23.882573   12940 pod_ready.go:92] pod "kube-scheduler-ha-766300" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:23.882573   12940 pod_ready.go:81] duration metric: took 409.8183ms for pod "kube-scheduler-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:23.882573   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:24.074748   12940 request.go:629] Waited for 191.5798ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-766300-m02
	I0807 18:43:24.074748   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-766300-m02
	I0807 18:43:24.074748   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:24.074748   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:24.074748   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:24.079344   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:24.276529   12940 request.go:629] Waited for 195.2434ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:43:24.276863   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:43:24.277008   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:24.277071   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:24.277071   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:24.282332   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:24.283671   12940 pod_ready.go:92] pod "kube-scheduler-ha-766300-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:24.283671   12940 pod_ready.go:81] duration metric: took 401.0932ms for pod "kube-scheduler-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:24.283671   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-766300-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:24.466003   12940 request.go:629] Waited for 182.3298ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-766300-m03
	I0807 18:43:24.466003   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-766300-m03
	I0807 18:43:24.466003   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:24.466003   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:24.466003   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:24.470380   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:24.670602   12940 request.go:629] Waited for 198.2502ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:24.670857   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:24.670857   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:24.670857   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:24.670857   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:24.675918   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:24.677418   12940 pod_ready.go:92] pod "kube-scheduler-ha-766300-m03" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:24.677527   12940 pod_ready.go:81] duration metric: took 393.8507ms for pod "kube-scheduler-ha-766300-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:24.677527   12940 pod_ready.go:38] duration metric: took 5.1992106s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 18:43:24.677637   12940 api_server.go:52] waiting for apiserver process to appear ...
	I0807 18:43:24.689204   12940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:43:24.720138   12940 api_server.go:72] duration metric: took 26.199119s to wait for apiserver process to appear ...
	I0807 18:43:24.720187   12940 api_server.go:88] waiting for apiserver healthz status ...
	I0807 18:43:24.720187   12940 api_server.go:253] Checking apiserver healthz at https://172.28.224.88:8443/healthz ...
	I0807 18:43:24.729167   12940 api_server.go:279] https://172.28.224.88:8443/healthz returned 200:
	ok
	I0807 18:43:24.729167   12940 round_trippers.go:463] GET https://172.28.224.88:8443/version
	I0807 18:43:24.729167   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:24.729167   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:24.729167   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:24.731170   12940 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:43:24.731234   12940 api_server.go:141] control plane version: v1.30.3
	I0807 18:43:24.731234   12940 api_server.go:131] duration metric: took 11.0465ms to wait for apiserver health ...
	I0807 18:43:24.731234   12940 system_pods.go:43] waiting for kube-system pods to appear ...
	I0807 18:43:24.871985   12940 request.go:629] Waited for 140.4152ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods
	I0807 18:43:24.871985   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods
	I0807 18:43:24.871985   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:24.871985   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:24.871985   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:24.881943   12940 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0807 18:43:24.892392   12940 system_pods.go:59] 24 kube-system pods found
	I0807 18:43:24.892392   12940 system_pods.go:61] "coredns-7db6d8ff4d-9tjv6" [54967df0-ac2c-4024-8947-b4e972a4b59a] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "coredns-7db6d8ff4d-fqjwg" [cc54cc3e-f40c-43c2-ac25-25bd315c3dd9] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "etcd-ha-766300" [5c619c4a-4fd5-494f-bb7b-80754258d40a] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "etcd-ha-766300-m02" [97b2b2f2-ea73-4de0-86aa-4854386b8f71] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "etcd-ha-766300-m03" [ddccee16-221c-4663-a38b-85a76115baf0] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "kindnet-6dc82" [d789c5c0-bde5-4abe-9bdd-515ce5c1a0f8] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "kindnet-gh6wt" [35666307-476d-460d-af1d-23d3bae8aec2] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "kindnet-scfzz" [ad036ebf-9679-47a6-b8e0-f433a34f55cb] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "kube-apiserver-ha-766300" [d1f122ef-d89f-4a4f-8194-86e5e84faea4] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "kube-apiserver-ha-766300-m02" [249c438f-592d-47ba-bf0b-252bde32a27d] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "kube-apiserver-ha-766300-m03" [27bb05ab-2345-469b-b8da-3f8c65d4c6cb] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "kube-controller-manager-ha-766300" [648bbb2b-06b4-487b-a9fa-c530a7ed5d11] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "kube-controller-manager-ha-766300-m02" [c8ab36c4-89ca-4519-8eaa-c27c00b78095] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "kube-controller-manager-ha-766300-m03" [91ce3e9c-5a16-483a-86cb-9eb67ae4825d] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "kube-proxy-8v6vm" [c6fa744a-fc9b-4da6-933a-866565e8318c] Running
	I0807 18:43:24.893784   12940 system_pods.go:61] "kube-proxy-d6ckx" [257858b0-6bb6-4bfb-9b5c-591fdb24929e] Running
	I0807 18:43:24.893784   12940 system_pods.go:61] "kube-proxy-mlf2g" [2b76f921-687d-4c43-bf2c-d3e8e5b865b2] Running
	I0807 18:43:24.893784   12940 system_pods.go:61] "kube-scheduler-ha-766300" [1d44914f-67d1-4b8f-934c-273d21dc7d60] Running
	I0807 18:43:24.893784   12940 system_pods.go:61] "kube-scheduler-ha-766300-m02" [22b9a1c1-e369-4270-90f6-f3caa10e0705] Running
	I0807 18:43:24.893784   12940 system_pods.go:61] "kube-scheduler-ha-766300-m03" [d32e668c-e2b9-42ed-944d-d3d4060c717b] Running
	I0807 18:43:24.893784   12940 system_pods.go:61] "kube-vip-ha-766300" [e2b31b5c-6e03-4e58-8cb4-10fc6869812b] Running
	I0807 18:43:24.893784   12940 system_pods.go:61] "kube-vip-ha-766300-m02" [0034d823-e21f-4be0-bbdb-09db13937fb7] Running
	I0807 18:43:24.893784   12940 system_pods.go:61] "kube-vip-ha-766300-m03" [cd71094c-0861-4ae6-86b3-051b3b3f8c63] Running
	I0807 18:43:24.893784   12940 system_pods.go:61] "storage-provisioner" [9a8a8ca1-bdd6-4ca8-a2d4-de3839223c9c] Running
	I0807 18:43:24.893784   12940 system_pods.go:74] duration metric: took 162.5482ms to wait for pod list to return data ...
	I0807 18:43:24.893784   12940 default_sa.go:34] waiting for default service account to be created ...
	I0807 18:43:25.074689   12940 request.go:629] Waited for 180.5999ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/default/serviceaccounts
	I0807 18:43:25.074689   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/default/serviceaccounts
	I0807 18:43:25.074689   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:25.074689   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:25.074689   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:25.079277   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:25.080589   12940 default_sa.go:45] found service account: "default"
	I0807 18:43:25.080589   12940 default_sa.go:55] duration metric: took 186.8029ms for default service account to be created ...
	I0807 18:43:25.080589   12940 system_pods.go:116] waiting for k8s-apps to be running ...
	I0807 18:43:25.264503   12940 request.go:629] Waited for 183.7581ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods
	I0807 18:43:25.264694   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods
	I0807 18:43:25.264694   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:25.264694   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:25.264694   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:25.273263   12940 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:43:25.284965   12940 system_pods.go:86] 24 kube-system pods found
	I0807 18:43:25.284965   12940 system_pods.go:89] "coredns-7db6d8ff4d-9tjv6" [54967df0-ac2c-4024-8947-b4e972a4b59a] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "coredns-7db6d8ff4d-fqjwg" [cc54cc3e-f40c-43c2-ac25-25bd315c3dd9] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "etcd-ha-766300" [5c619c4a-4fd5-494f-bb7b-80754258d40a] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "etcd-ha-766300-m02" [97b2b2f2-ea73-4de0-86aa-4854386b8f71] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "etcd-ha-766300-m03" [ddccee16-221c-4663-a38b-85a76115baf0] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kindnet-6dc82" [d789c5c0-bde5-4abe-9bdd-515ce5c1a0f8] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kindnet-gh6wt" [35666307-476d-460d-af1d-23d3bae8aec2] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kindnet-scfzz" [ad036ebf-9679-47a6-b8e0-f433a34f55cb] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-apiserver-ha-766300" [d1f122ef-d89f-4a4f-8194-86e5e84faea4] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-apiserver-ha-766300-m02" [249c438f-592d-47ba-bf0b-252bde32a27d] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-apiserver-ha-766300-m03" [27bb05ab-2345-469b-b8da-3f8c65d4c6cb] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-controller-manager-ha-766300" [648bbb2b-06b4-487b-a9fa-c530a7ed5d11] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-controller-manager-ha-766300-m02" [c8ab36c4-89ca-4519-8eaa-c27c00b78095] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-controller-manager-ha-766300-m03" [91ce3e9c-5a16-483a-86cb-9eb67ae4825d] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-proxy-8v6vm" [c6fa744a-fc9b-4da6-933a-866565e8318c] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-proxy-d6ckx" [257858b0-6bb6-4bfb-9b5c-591fdb24929e] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-proxy-mlf2g" [2b76f921-687d-4c43-bf2c-d3e8e5b865b2] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-scheduler-ha-766300" [1d44914f-67d1-4b8f-934c-273d21dc7d60] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-scheduler-ha-766300-m02" [22b9a1c1-e369-4270-90f6-f3caa10e0705] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-scheduler-ha-766300-m03" [d32e668c-e2b9-42ed-944d-d3d4060c717b] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-vip-ha-766300" [e2b31b5c-6e03-4e58-8cb4-10fc6869812b] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-vip-ha-766300-m02" [0034d823-e21f-4be0-bbdb-09db13937fb7] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-vip-ha-766300-m03" [cd71094c-0861-4ae6-86b3-051b3b3f8c63] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "storage-provisioner" [9a8a8ca1-bdd6-4ca8-a2d4-de3839223c9c] Running
	I0807 18:43:25.284965   12940 system_pods.go:126] duration metric: took 204.3729ms to wait for k8s-apps to be running ...
	I0807 18:43:25.284965   12940 system_svc.go:44] waiting for kubelet service to be running ....
	I0807 18:43:25.295880   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:43:25.321563   12940 system_svc.go:56] duration metric: took 36.598ms WaitForService to wait for kubelet
	I0807 18:43:25.321563   12940 kubeadm.go:582] duration metric: took 26.8005931s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 18:43:25.321563   12940 node_conditions.go:102] verifying NodePressure condition ...
	I0807 18:43:25.466442   12940 request.go:629] Waited for 144.8775ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes
	I0807 18:43:25.466442   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes
	I0807 18:43:25.466442   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:25.466442   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:25.466442   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:25.472262   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:25.473818   12940 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 18:43:25.473874   12940 node_conditions.go:123] node cpu capacity is 2
	I0807 18:43:25.473874   12940 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 18:43:25.473874   12940 node_conditions.go:123] node cpu capacity is 2
	I0807 18:43:25.473874   12940 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 18:43:25.473874   12940 node_conditions.go:123] node cpu capacity is 2
	I0807 18:43:25.473874   12940 node_conditions.go:105] duration metric: took 152.3093ms to run NodePressure ...
	I0807 18:43:25.473967   12940 start.go:241] waiting for startup goroutines ...
	I0807 18:43:25.473996   12940 start.go:255] writing updated cluster config ...
	I0807 18:43:25.485867   12940 ssh_runner.go:195] Run: rm -f paused
	I0807 18:43:25.635342   12940 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0807 18:43:25.644678   12940 out.go:177] * Done! kubectl is now configured to use "ha-766300" cluster and "default" namespace by default
	
	
	==> Docker <==
	Aug 07 18:35:18 ha-766300 cri-dockerd[1322]: time="2024-08-07T18:35:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/12d6c9334d4425d43319143dec237fcd1d312fef7c677a9975134d01282056a6/resolv.conf as [nameserver 172.28.224.1]"
	Aug 07 18:35:18 ha-766300 cri-dockerd[1322]: time="2024-08-07T18:35:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dde8345db34d686c4a2d04fd42f437311c2dff12db4e4dd99e35580a5452eb95/resolv.conf as [nameserver 172.28.224.1]"
	Aug 07 18:35:18 ha-766300 cri-dockerd[1322]: time="2024-08-07T18:35:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2a4270fc3f1c85a3f133cecec4a09f34590f6c234212ceba02843e977d9caa7f/resolv.conf as [nameserver 172.28.224.1]"
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.691842981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.692394613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.692469918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.692647328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.944485850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.944877973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.944907275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.946413663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.984554192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.984715702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.984736103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.984851310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 18:44:05 ha-766300 dockerd[1431]: time="2024-08-07T18:44:05.936488548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 18:44:05 ha-766300 dockerd[1431]: time="2024-08-07T18:44:05.937535814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 18:44:05 ha-766300 dockerd[1431]: time="2024-08-07T18:44:05.937679123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 18:44:05 ha-766300 dockerd[1431]: time="2024-08-07T18:44:05.938192155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 18:44:06 ha-766300 cri-dockerd[1322]: time="2024-08-07T18:44:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8fddb084e3687ad8a0d4294508da0d90d7fb78fa7e19d31c34592dc1b225afab/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 07 18:44:07 ha-766300 cri-dockerd[1322]: time="2024-08-07T18:44:07Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 07 18:44:07 ha-766300 dockerd[1431]: time="2024-08-07T18:44:07.878883437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 18:44:07 ha-766300 dockerd[1431]: time="2024-08-07T18:44:07.879041138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 18:44:07 ha-766300 dockerd[1431]: time="2024-08-07T18:44:07.879764643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 18:44:07 ha-766300 dockerd[1431]: time="2024-08-07T18:44:07.880367647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	23194f269aa45       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   8fddb084e3687       busybox-fc5497c4f-bjlr2
	16929881bad0a       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   2a4270fc3f1c8       coredns-7db6d8ff4d-9tjv6
	83c48e5354794       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   dde8345db34d6       coredns-7db6d8ff4d-fqjwg
	3c1d664501256       6e38f40d628db                                                                                         9 minutes ago        Running             storage-provisioner       0                   12d6c9334d442       storage-provisioner
	da03949685ffc       kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3              10 minutes ago       Running             kindnet-cni               0                   b832453c59d79       kindnet-scfzz
	0d1a15c98c836       55bb025d2cfa5                                                                                         10 minutes ago       Running             kube-proxy                0                   3bb6abb82e815       kube-proxy-d6ckx
	dfcf346254418       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     10 minutes ago       Running             kube-vip                  0                   f692d837338a8       kube-vip-ha-766300
	a649001975784       3edc18e7b7672                                                                                         10 minutes ago       Running             kube-scheduler            0                   1bb59e814b31c       kube-scheduler-ha-766300
	f0640929d8e27       76932a3b37d7e                                                                                         10 minutes ago       Running             kube-controller-manager   0                   64dcc1244fc8e       kube-controller-manager-ha-766300
	507c64bcc82fe       1f6d574d502f3                                                                                         10 minutes ago       Running             kube-apiserver            0                   ec7864f9c3a86       kube-apiserver-ha-766300
	193edd22f66f2       3861cfcd7c04c                                                                                         10 minutes ago       Running             etcd                      0                   9df588292e306       etcd-ha-766300
	
	
	==> coredns [16929881bad0] <==
	[INFO] 10.244.0.4:44995 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010623881s
	[INFO] 10.244.0.4:50470 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176001s
	[INFO] 10.244.2.2:35902 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118301s
	[INFO] 10.244.2.2:43828 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000221402s
	[INFO] 10.244.2.2:54385 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000152001s
	[INFO] 10.244.2.2:54951 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101201s
	[INFO] 10.244.1.2:47735 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000245002s
	[INFO] 10.244.1.2:42104 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000699s
	[INFO] 10.244.1.2:56128 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112801s
	[INFO] 10.244.1.2:52441 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000058201s
	[INFO] 10.244.1.2:38748 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000134701s
	[INFO] 10.244.1.2:52360 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069401s
	[INFO] 10.244.0.4:57534 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181801s
	[INFO] 10.244.0.4:58557 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104201s
	[INFO] 10.244.2.2:55827 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097801s
	[INFO] 10.244.2.2:45886 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000612s
	[INFO] 10.244.1.2:51840 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118401s
	[INFO] 10.244.1.2:34688 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000172301s
	[INFO] 10.244.0.4:43231 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000175001s
	[INFO] 10.244.0.4:44271 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000126801s
	[INFO] 10.244.0.4:40974 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000333603s
	[INFO] 10.244.2.2:55045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000310003s
	[INFO] 10.244.1.2:57077 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200802s
	[INFO] 10.244.1.2:54114 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141001s
	[INFO] 10.244.1.2:48087 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108701s
	
	
	==> coredns [83c48e535479] <==
	[INFO] 127.0.0.1:35778 - 43758 "HINFO IN 3852137065385310320.8835117782204073892. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045677289s
	[INFO] 10.244.0.4:53905 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000542804s
	[INFO] 10.244.0.4:59238 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.183525039s
	[INFO] 10.244.0.4:55003 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.078953377s
	[INFO] 10.244.2.2:33889 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000114901s
	[INFO] 10.244.0.4:40720 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00262562s
	[INFO] 10.244.0.4:36444 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183002s
	[INFO] 10.244.0.4:43113 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000161602s
	[INFO] 10.244.2.2:42033 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.014107207s
	[INFO] 10.244.2.2:47908 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000299002s
	[INFO] 10.244.2.2:59253 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158201s
	[INFO] 10.244.2.2:46148 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095601s
	[INFO] 10.244.1.2:60723 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141601s
	[INFO] 10.244.1.2:50356 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000665s
	[INFO] 10.244.0.4:43623 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000168401s
	[INFO] 10.244.0.4:57113 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000140402s
	[INFO] 10.244.2.2:36171 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189301s
	[INFO] 10.244.2.2:58671 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148802s
	[INFO] 10.244.1.2:51248 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081201s
	[INFO] 10.244.1.2:33225 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163701s
	[INFO] 10.244.0.4:35196 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000162201s
	[INFO] 10.244.2.2:60165 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000247803s
	[INFO] 10.244.2.2:60957 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109001s
	[INFO] 10.244.2.2:45736 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135401s
	[INFO] 10.244.1.2:52909 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000169902s
	
	
	==> describe nodes <==
	Name:               ha-766300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-766300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=ha-766300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_07T18_34_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:34:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-766300
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 18:45:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 18:44:14 +0000   Wed, 07 Aug 2024 18:34:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 18:44:14 +0000   Wed, 07 Aug 2024 18:34:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 18:44:14 +0000   Wed, 07 Aug 2024 18:34:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 18:44:14 +0000   Wed, 07 Aug 2024 18:35:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.224.88
	  Hostname:    ha-766300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c5317959630842a6b7e0aa3810fe4295
	  System UUID:                5346e03b-026b-e04b-9201-e5a67ac4a16c
	  Boot ID:                    cac6f773-e394-492a-baf0-e6da55bb7dc7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bjlr2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 coredns-7db6d8ff4d-9tjv6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 coredns-7db6d8ff4d-fqjwg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-ha-766300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-scfzz                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-766300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-766300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-d6ckx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-766300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-766300                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientPID     10m (x5 over 10m)  kubelet          Node ha-766300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x5 over 10m)  kubelet          Node ha-766300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x5 over 10m)  kubelet          Node ha-766300 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node ha-766300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node ha-766300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node ha-766300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node ha-766300 event: Registered Node ha-766300 in Controller
	  Normal  NodeReady                9m54s              kubelet          Node ha-766300 status is now: NodeReady
	  Normal  RegisteredNode           6m6s               node-controller  Node ha-766300 event: Registered Node ha-766300 in Controller
	  Normal  RegisteredNode           117s               node-controller  Node ha-766300 event: Registered Node ha-766300 in Controller
	
	
	Name:               ha-766300-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-766300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=ha-766300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_07T18_38_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:38:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-766300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 18:45:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 18:44:23 +0000   Wed, 07 Aug 2024 18:38:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 18:44:23 +0000   Wed, 07 Aug 2024 18:38:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 18:44:23 +0000   Wed, 07 Aug 2024 18:38:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 18:44:23 +0000   Wed, 07 Aug 2024 18:39:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.238.183
	  Hostname:    ha-766300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 36f054ad468f42ab970f742479c45f7a
	  System UUID:                42dafca7-5b82-6143-bac4-f9c62f25a264
	  Boot ID:                    1fa0c76a-003c-4aaa-93e8-84f4d372b400
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wf2xw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 etcd-ha-766300-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m22s
	  kube-system                 kindnet-gh6wt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m26s
	  kube-system                 kube-apiserver-ha-766300-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-controller-manager-ha-766300-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-proxy-8v6vm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-scheduler-ha-766300-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-vip-ha-766300-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m26s (x8 over 6m26s)  kubelet          Node ha-766300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m26s (x8 over 6m26s)  kubelet          Node ha-766300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m26s (x7 over 6m26s)  kubelet          Node ha-766300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m25s                  node-controller  Node ha-766300-m02 event: Registered Node ha-766300-m02 in Controller
	  Normal  RegisteredNode           6m6s                   node-controller  Node ha-766300-m02 event: Registered Node ha-766300-m02 in Controller
	  Normal  RegisteredNode           117s                   node-controller  Node ha-766300-m02 event: Registered Node ha-766300-m02 in Controller
	
	
	Name:               ha-766300-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-766300-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=ha-766300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_07T18_42_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:42:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-766300-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 18:45:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 18:44:23 +0000   Wed, 07 Aug 2024 18:42:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 18:44:23 +0000   Wed, 07 Aug 2024 18:42:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 18:44:23 +0000   Wed, 07 Aug 2024 18:42:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 18:44:23 +0000   Wed, 07 Aug 2024 18:43:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.233.130
	  Hostname:    ha-766300-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 97728720ee544fcab7db4cd2bb62cd5d
	  System UUID:                f483d94a-ed8f-3149-ad04-955322a17cb0
	  Boot ID:                    b5cedbd0-2c12-4be8-a4d1-4f9d3be93238
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vzv8c                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 etcd-ha-766300-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m16s
	  kube-system                 kindnet-6dc82                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m21s
	  kube-system                 kube-apiserver-ha-766300-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-controller-manager-ha-766300-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-proxy-mlf2g                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-ha-766300-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-vip-ha-766300-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m15s                  kube-proxy       
	  Normal  RegisteredNode           2m21s                  node-controller  Node ha-766300-m03 event: Registered Node ha-766300-m03 in Controller
	  Normal  NodeHasSufficientMemory  2m21s (x8 over 2m21s)  kubelet          Node ha-766300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s (x8 over 2m21s)  kubelet          Node ha-766300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s (x7 over 2m21s)  kubelet          Node ha-766300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m20s                  node-controller  Node ha-766300-m03 event: Registered Node ha-766300-m03 in Controller
	  Normal  RegisteredNode           117s                   node-controller  Node ha-766300-m03 event: Registered Node ha-766300-m03 in Controller
	
	
	==> dmesg <==
	[  +7.245015] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug 7 18:33] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.176570] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[Aug 7 18:34] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[  +0.105023] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.559767] systemd-fstab-generator[1036]: Ignoring "noauto" option for root device
	[  +0.197671] systemd-fstab-generator[1048]: Ignoring "noauto" option for root device
	[  +0.247347] systemd-fstab-generator[1062]: Ignoring "noauto" option for root device
	[  +2.899254] systemd-fstab-generator[1275]: Ignoring "noauto" option for root device
	[  +0.196827] systemd-fstab-generator[1287]: Ignoring "noauto" option for root device
	[  +0.207142] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.273415] systemd-fstab-generator[1314]: Ignoring "noauto" option for root device
	[ +12.041424] systemd-fstab-generator[1417]: Ignoring "noauto" option for root device
	[  +0.121430] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.843613] systemd-fstab-generator[1676]: Ignoring "noauto" option for root device
	[  +6.261851] systemd-fstab-generator[1876]: Ignoring "noauto" option for root device
	[  +0.111987] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.516703] kauditd_printk_skb: 67 callbacks suppressed
	[  +3.550326] systemd-fstab-generator[2372]: Ignoring "noauto" option for root device
	[ +15.057245] kauditd_printk_skb: 17 callbacks suppressed
	[Aug 7 18:35] kauditd_printk_skb: 29 callbacks suppressed
	[Aug 7 18:38] kauditd_printk_skb: 26 callbacks suppressed
	[  +2.160193] hrtimer: interrupt took 2279434 ns
	
	
	==> etcd [193edd22f66f] <==
	{"level":"info","ts":"2024-08-07T18:42:53.632799Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"c907e0b07277d2d0","remote-peer-id":"be852c5e1a2772b3"}
	{"level":"info","ts":"2024-08-07T18:42:53.633227Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c907e0b07277d2d0","remote-peer-id":"be852c5e1a2772b3"}
	{"level":"info","ts":"2024-08-07T18:42:53.657705Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c907e0b07277d2d0","to":"be852c5e1a2772b3","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-07T18:42:53.658621Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"c907e0b07277d2d0","remote-peer-id":"be852c5e1a2772b3"}
	{"level":"info","ts":"2024-08-07T18:42:53.697728Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c907e0b07277d2d0","to":"be852c5e1a2772b3","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-07T18:42:53.698142Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"c907e0b07277d2d0","remote-peer-id":"be852c5e1a2772b3"}
	{"level":"warn","ts":"2024-08-07T18:42:53.707024Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"172.28.233.130:59160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-08-07T18:42:54.159311Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"be852c5e1a2772b3","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-08-07T18:42:55.159306Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"be852c5e1a2772b3","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-08-07T18:42:56.159392Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"be852c5e1a2772b3","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-08-07T18:42:56.78557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c907e0b07277d2d0 switched to configuration voters=(1435338394422264370 13728427821786165939 14485793774899811024)"}
	{"level":"info","ts":"2024-08-07T18:42:56.786274Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"e9ee9dc93e0bfaba","local-member-id":"c907e0b07277d2d0"}
	{"level":"info","ts":"2024-08-07T18:42:56.786324Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"c907e0b07277d2d0","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"be852c5e1a2772b3"}
	{"level":"warn","ts":"2024-08-07T18:43:02.472013Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"be852c5e1a2772b3","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"19.512041ms"}
	{"level":"warn","ts":"2024-08-07T18:43:02.472333Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"13eb58aa3c04c232","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"19.834161ms"}
	{"level":"info","ts":"2024-08-07T18:43:02.498434Z","caller":"traceutil/trace.go:171","msg":"trace[1999066324] linearizableReadLoop","detail":"{readStateIndex:1790; appliedIndex:1790; }","duration":"170.144173ms","start":"2024-08-07T18:43:02.328275Z","end":"2024-08-07T18:43:02.498419Z","steps":["trace[1999066324] 'read index received'  (duration: 170.139873ms)","trace[1999066324] 'applied index is now lower than readState.Index'  (duration: 3.2µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-07T18:43:02.498695Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.441692ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-08-07T18:43:02.498749Z","caller":"traceutil/trace.go:171","msg":"trace[1241968793] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1599; }","duration":"170.5765ms","start":"2024-08-07T18:43:02.328163Z","end":"2024-08-07T18:43:02.498739Z","steps":["trace[1241968793] 'agreement among raft nodes before linearized reading'  (duration: 170.328685ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T18:43:02.499965Z","caller":"traceutil/trace.go:171","msg":"trace[1395059899] transaction","detail":"{read_only:false; response_revision:1600; number_of_response:1; }","duration":"246.580824ms","start":"2024-08-07T18:43:02.253375Z","end":"2024-08-07T18:43:02.499956Z","steps":["trace[1395059899] 'process raft request'  (duration: 246.186499ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T18:43:02.500497Z","caller":"traceutil/trace.go:171","msg":"trace[1155220119] transaction","detail":"{read_only:false; response_revision:1601; number_of_response:1; }","duration":"215.679003ms","start":"2024-08-07T18:43:02.284807Z","end":"2024-08-07T18:43:02.500486Z","steps":["trace[1155220119] 'process raft request'  (duration: 214.82955ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-07T18:44:04.932932Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.789656ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-fc5497c4f-vzv8c\" ","response":"range_response_count:1 size:1813"}
	{"level":"info","ts":"2024-08-07T18:44:04.941912Z","caller":"traceutil/trace.go:171","msg":"trace[1145859393] range","detail":"{range_begin:/registry/pods/default/busybox-fc5497c4f-vzv8c; range_end:; response_count:1; response_revision:1794; }","duration":"121.774618ms","start":"2024-08-07T18:44:04.82012Z","end":"2024-08-07T18:44:04.941894Z","steps":["trace[1145859393] 'agreement among raft nodes before linearized reading'  (duration: 104.210519ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T18:44:38.126124Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1066}
	{"level":"info","ts":"2024-08-07T18:44:38.195239Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1066,"took":"68.379594ms","hash":4008549328,"current-db-size-bytes":3657728,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":2129920,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-08-07T18:44:38.195299Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4008549328,"revision":1066,"compact-revision":-1}
	
	
	==> kernel <==
	 18:45:11 up 12 min,  0 users,  load average: 0.72, 0.88, 0.54
	Linux ha-766300 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [da03949685ff] <==
	I0807 18:44:27.017612       1 main.go:322] Node ha-766300-m03 has CIDR [10.244.2.0/24] 
	I0807 18:44:37.023621       1 main.go:295] Handling node with IPs: map[172.28.224.88:{}]
	I0807 18:44:37.023740       1 main.go:299] handling current node
	I0807 18:44:37.023760       1 main.go:295] Handling node with IPs: map[172.28.238.183:{}]
	I0807 18:44:37.023767       1 main.go:322] Node ha-766300-m02 has CIDR [10.244.1.0/24] 
	I0807 18:44:37.024311       1 main.go:295] Handling node with IPs: map[172.28.233.130:{}]
	I0807 18:44:37.024341       1 main.go:322] Node ha-766300-m03 has CIDR [10.244.2.0/24] 
	I0807 18:44:47.017173       1 main.go:295] Handling node with IPs: map[172.28.224.88:{}]
	I0807 18:44:47.017275       1 main.go:299] handling current node
	I0807 18:44:47.017313       1 main.go:295] Handling node with IPs: map[172.28.238.183:{}]
	I0807 18:44:47.017321       1 main.go:322] Node ha-766300-m02 has CIDR [10.244.1.0/24] 
	I0807 18:44:47.017766       1 main.go:295] Handling node with IPs: map[172.28.233.130:{}]
	I0807 18:44:47.017867       1 main.go:322] Node ha-766300-m03 has CIDR [10.244.2.0/24] 
	I0807 18:44:57.017209       1 main.go:295] Handling node with IPs: map[172.28.233.130:{}]
	I0807 18:44:57.017338       1 main.go:322] Node ha-766300-m03 has CIDR [10.244.2.0/24] 
	I0807 18:44:57.017636       1 main.go:295] Handling node with IPs: map[172.28.224.88:{}]
	I0807 18:44:57.017674       1 main.go:299] handling current node
	I0807 18:44:57.017691       1 main.go:295] Handling node with IPs: map[172.28.238.183:{}]
	I0807 18:44:57.017698       1 main.go:322] Node ha-766300-m02 has CIDR [10.244.1.0/24] 
	I0807 18:45:07.014883       1 main.go:295] Handling node with IPs: map[172.28.224.88:{}]
	I0807 18:45:07.014985       1 main.go:299] handling current node
	I0807 18:45:07.015007       1 main.go:295] Handling node with IPs: map[172.28.238.183:{}]
	I0807 18:45:07.015014       1 main.go:322] Node ha-766300-m02 has CIDR [10.244.1.0/24] 
	I0807 18:45:07.015525       1 main.go:295] Handling node with IPs: map[172.28.233.130:{}]
	I0807 18:45:07.015618       1 main.go:322] Node ha-766300-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [507c64bcc82f] <==
	I0807 18:34:43.803550       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0807 18:34:43.906468       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0807 18:34:43.956776       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0807 18:34:57.488851       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0807 18:34:57.771335       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0807 18:42:51.779639       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 11µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0807 18:42:51.779654       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0807 18:42:51.816625       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0807 18:42:51.865230       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0807 18:42:51.865697       1 timeout.go:142] post-timeout activity - time-elapsed: 152.032735ms, PATCH "/api/v1/namespaces/default/events/ha-766300-m03.17e986754b262b80" result: <nil>
	E0807 18:44:11.432809       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:50986: use of closed network connection
	E0807 18:44:11.993184       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:50988: use of closed network connection
	E0807 18:44:12.680856       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:50990: use of closed network connection
	E0807 18:44:13.256191       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:50992: use of closed network connection
	E0807 18:44:13.800993       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:50994: use of closed network connection
	E0807 18:44:14.378819       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:50996: use of closed network connection
	E0807 18:44:14.906373       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:50998: use of closed network connection
	E0807 18:44:15.449732       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:51000: use of closed network connection
	E0807 18:44:15.969673       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:51002: use of closed network connection
	E0807 18:44:16.936153       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:51005: use of closed network connection
	E0807 18:44:27.499343       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:51007: use of closed network connection
	E0807 18:44:28.013415       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:51010: use of closed network connection
	E0807 18:44:38.557430       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:51012: use of closed network connection
	E0807 18:44:39.054033       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:51015: use of closed network connection
	E0807 18:44:49.561313       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:51017: use of closed network connection
	
	
	==> kube-controller-manager [f0640929d8e2] <==
	I0807 18:35:19.752502       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="80.704µs"
	I0807 18:35:19.845417       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="33.27344ms"
	I0807 18:35:19.847043       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="65.104µs"
	I0807 18:35:19.901941       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.332562ms"
	I0807 18:35:19.904332       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="313.516µs"
	I0807 18:35:21.806687       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0807 18:38:45.681167       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-766300-m02\" does not exist"
	I0807 18:38:45.729734       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-766300-m02" podCIDRs=["10.244.1.0/24"]
	I0807 18:38:46.848512       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-766300-m02"
	I0807 18:42:50.875914       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-766300-m03\" does not exist"
	I0807 18:42:50.909950       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-766300-m03" podCIDRs=["10.244.2.0/24"]
	I0807 18:42:51.901007       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-766300-m03"
	I0807 18:44:04.835353       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="155.490627ms"
	I0807 18:44:05.117822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="282.414968ms"
	I0807 18:44:05.361890       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="243.994865ms"
	I0807 18:44:05.415503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.478621ms"
	I0807 18:44:05.416025       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="460.629µs"
	I0807 18:44:05.743993       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="189.25844ms"
	I0807 18:44:05.744565       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="509.732µs"
	I0807 18:44:08.240855       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.057742ms"
	I0807 18:44:08.241725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.5µs"
	I0807 18:44:08.447525       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="99.075803ms"
	I0807 18:44:08.448231       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="530.804µs"
	I0807 18:44:08.578645       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.590198ms"
	I0807 18:44:08.579433       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="170.301µs"
	
	
	==> kube-proxy [0d1a15c98c83] <==
	I0807 18:34:58.963339       1 server_linux.go:69] "Using iptables proxy"
	I0807 18:34:58.980292       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.224.88"]
	I0807 18:34:59.061540       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0807 18:34:59.061693       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0807 18:34:59.061754       1 server_linux.go:165] "Using iptables Proxier"
	I0807 18:34:59.065726       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0807 18:34:59.066407       1 server.go:872] "Version info" version="v1.30.3"
	I0807 18:34:59.066519       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 18:34:59.067988       1 config.go:192] "Starting service config controller"
	I0807 18:34:59.068028       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0807 18:34:59.068121       1 config.go:101] "Starting endpoint slice config controller"
	I0807 18:34:59.068133       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0807 18:34:59.068808       1 config.go:319] "Starting node config controller"
	I0807 18:34:59.068844       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0807 18:34:59.169255       1 shared_informer.go:320] Caches are synced for node config
	I0807 18:34:59.169317       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0807 18:34:59.169291       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [a64900197578] <==
	W0807 18:34:41.635483       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0807 18:34:41.635666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0807 18:34:41.740898       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0807 18:34:41.740999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0807 18:34:41.882748       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0807 18:34:41.883113       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0807 18:34:41.954032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0807 18:34:41.954212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0807 18:34:42.012056       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0807 18:34:42.012532       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0807 18:34:42.065921       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0807 18:34:42.065975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0807 18:34:42.078139       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0807 18:34:42.078518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0807 18:34:42.162830       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0807 18:34:42.162872       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0807 18:34:42.190521       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0807 18:34:42.190865       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0807 18:34:42.210057       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0807 18:34:42.210377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0807 18:34:44.207344       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0807 18:44:04.856049       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-bjlr2\": pod busybox-fc5497c4f-bjlr2 is already assigned to node \"ha-766300\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-bjlr2" node="ha-766300"
	E0807 18:44:04.858389       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod a2c15ee6-19fe-4744-8b8e-419dcae7ca05(default/busybox-fc5497c4f-bjlr2) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-bjlr2"
	E0807 18:44:04.858968       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-bjlr2\": pod busybox-fc5497c4f-bjlr2 is already assigned to node \"ha-766300\"" pod="default/busybox-fc5497c4f-bjlr2"
	I0807 18:44:04.859186       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-bjlr2" node="ha-766300"
	
	
	==> kubelet <==
	Aug 07 18:40:43 ha-766300 kubelet[2378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:40:43 ha-766300 kubelet[2378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 18:41:43 ha-766300 kubelet[2378]: E0807 18:41:43.931943    2378 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 18:41:43 ha-766300 kubelet[2378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 18:41:43 ha-766300 kubelet[2378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:41:43 ha-766300 kubelet[2378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:41:43 ha-766300 kubelet[2378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 18:42:43 ha-766300 kubelet[2378]: E0807 18:42:43.932458    2378 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 18:42:43 ha-766300 kubelet[2378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 18:42:43 ha-766300 kubelet[2378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:42:43 ha-766300 kubelet[2378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:42:43 ha-766300 kubelet[2378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 18:43:43 ha-766300 kubelet[2378]: E0807 18:43:43.928787    2378 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 18:43:43 ha-766300 kubelet[2378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 18:43:43 ha-766300 kubelet[2378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:43:43 ha-766300 kubelet[2378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:43:43 ha-766300 kubelet[2378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 18:44:04 ha-766300 kubelet[2378]: I0807 18:44:04.839711    2378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=539.839605324 podStartE2EDuration="8m59.839605324s" podCreationTimestamp="2024-08-07 18:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-07 18:35:19.903827573 +0000 UTC m=+36.264308100" watchObservedRunningTime="2024-08-07 18:44:04.839605324 +0000 UTC m=+561.200085751"
	Aug 07 18:44:04 ha-766300 kubelet[2378]: I0807 18:44:04.841752    2378 topology_manager.go:215] "Topology Admit Handler" podUID="a2c15ee6-19fe-4744-8b8e-419dcae7ca05" podNamespace="default" podName="busybox-fc5497c4f-bjlr2"
	Aug 07 18:44:04 ha-766300 kubelet[2378]: I0807 18:44:04.969890    2378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m4rl\" (UniqueName: \"kubernetes.io/projected/a2c15ee6-19fe-4744-8b8e-419dcae7ca05-kube-api-access-6m4rl\") pod \"busybox-fc5497c4f-bjlr2\" (UID: \"a2c15ee6-19fe-4744-8b8e-419dcae7ca05\") " pod="default/busybox-fc5497c4f-bjlr2"
	Aug 07 18:44:43 ha-766300 kubelet[2378]: E0807 18:44:43.928952    2378 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 18:44:43 ha-766300 kubelet[2378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 18:44:43 ha-766300 kubelet[2378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:44:43 ha-766300 kubelet[2378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:44:43 ha-766300 kubelet[2378]:  > table="nat" chain="KUBE-KUBELET-CANARY"

-- /stdout --
** stderr ** 
	W0807 18:45:02.643813    8408 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-766300 -n ha-766300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-766300 -n ha-766300: (13.1930466s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-766300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (71.18s)

TestMultiControlPlane/serial/CopyFile (694.8s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 status --output json -v=7 --alsologtostderr: (51.0325928s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 cp testdata\cp-test.txt ha-766300:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 cp testdata\cp-test.txt ha-766300:/home/docker/cp-test.txt: (10.1516198s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300 "sudo cat /home/docker/cp-test.txt": (9.9763095s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile408936721\001\cp-test_ha-766300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile408936721\001\cp-test_ha-766300.txt: (10.1484702s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300 "sudo cat /home/docker/cp-test.txt": (10.1772421s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300:/home/docker/cp-test.txt ha-766300-m02:/home/docker/cp-test_ha-766300_ha-766300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300:/home/docker/cp-test.txt ha-766300-m02:/home/docker/cp-test_ha-766300_ha-766300-m02.txt: (17.4687763s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300 "sudo cat /home/docker/cp-test.txt": (10.0499545s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m02 "sudo cat /home/docker/cp-test_ha-766300_ha-766300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m02 "sudo cat /home/docker/cp-test_ha-766300_ha-766300-m02.txt": (9.9807223s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300:/home/docker/cp-test.txt ha-766300-m03:/home/docker/cp-test_ha-766300_ha-766300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300:/home/docker/cp-test.txt ha-766300-m03:/home/docker/cp-test_ha-766300_ha-766300-m03.txt: (17.7152635s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300 "sudo cat /home/docker/cp-test.txt": (10.0474895s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m03 "sudo cat /home/docker/cp-test_ha-766300_ha-766300-m03.txt"
E0807 18:53:20.504006    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m03 "sudo cat /home/docker/cp-test_ha-766300_ha-766300-m03.txt": (10.0659822s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300:/home/docker/cp-test.txt ha-766300-m04:/home/docker/cp-test_ha-766300_ha-766300-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300:/home/docker/cp-test.txt ha-766300-m04:/home/docker/cp-test_ha-766300_ha-766300-m04.txt: (17.4731219s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300 "sudo cat /home/docker/cp-test.txt"
E0807 18:53:38.116236    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300 "sudo cat /home/docker/cp-test.txt": (10.2030684s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m04 "sudo cat /home/docker/cp-test_ha-766300_ha-766300-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m04 "sudo cat /home/docker/cp-test_ha-766300_ha-766300-m04.txt": (10.0826799s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 cp testdata\cp-test.txt ha-766300-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 cp testdata\cp-test.txt ha-766300-m02:/home/docker/cp-test.txt: (10.4128547s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m02 "sudo cat /home/docker/cp-test.txt": (10.5625479s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile408936721\001\cp-test_ha-766300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile408936721\001\cp-test_ha-766300-m02.txt: (10.5215169s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m02 "sudo cat /home/docker/cp-test.txt": (10.488534s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m02:/home/docker/cp-test.txt ha-766300:/home/docker/cp-test_ha-766300-m02_ha-766300.txt
E0807 18:54:43.687416    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m02:/home/docker/cp-test.txt ha-766300:/home/docker/cp-test_ha-766300-m02_ha-766300.txt: (17.8427139s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m02 "sudo cat /home/docker/cp-test.txt": (10.1905623s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300 "sudo cat /home/docker/cp-test_ha-766300-m02_ha-766300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300 "sudo cat /home/docker/cp-test_ha-766300-m02_ha-766300.txt": (10.4224399s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m02:/home/docker/cp-test.txt ha-766300-m03:/home/docker/cp-test_ha-766300-m02_ha-766300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m02:/home/docker/cp-test.txt ha-766300-m03:/home/docker/cp-test_ha-766300-m02_ha-766300-m03.txt: (18.0766596s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m02 "sudo cat /home/docker/cp-test.txt": (10.2966183s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m03 "sudo cat /home/docker/cp-test_ha-766300-m02_ha-766300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m03 "sudo cat /home/docker/cp-test_ha-766300-m02_ha-766300-m03.txt": (10.2167392s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m02:/home/docker/cp-test.txt ha-766300-m04:/home/docker/cp-test_ha-766300-m02_ha-766300-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m02:/home/docker/cp-test.txt ha-766300-m04:/home/docker/cp-test_ha-766300-m02_ha-766300-m04.txt: (17.8267288s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m02 "sudo cat /home/docker/cp-test.txt": (10.3720818s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m04 "sudo cat /home/docker/cp-test_ha-766300-m02_ha-766300-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m04 "sudo cat /home/docker/cp-test_ha-766300-m02_ha-766300-m04.txt": (10.3248171s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 cp testdata\cp-test.txt ha-766300-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 cp testdata\cp-test.txt ha-766300-m03:/home/docker/cp-test.txt: (10.3072865s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m03 "sudo cat /home/docker/cp-test.txt": (10.224171s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile408936721\001\cp-test_ha-766300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile408936721\001\cp-test_ha-766300-m03.txt: (10.3198585s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m03 "sudo cat /home/docker/cp-test.txt": (10.2289074s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m03:/home/docker/cp-test.txt ha-766300:/home/docker/cp-test_ha-766300-m03_ha-766300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m03:/home/docker/cp-test.txt ha-766300:/home/docker/cp-test_ha-766300-m03_ha-766300.txt: (17.7664603s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m03 "sudo cat /home/docker/cp-test.txt": (10.1890952s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300 "sudo cat /home/docker/cp-test_ha-766300-m03_ha-766300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300 "sudo cat /home/docker/cp-test_ha-766300-m03_ha-766300.txt": (10.2095759s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m03:/home/docker/cp-test.txt ha-766300-m02:/home/docker/cp-test_ha-766300-m03_ha-766300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m03:/home/docker/cp-test.txt ha-766300-m02:/home/docker/cp-test_ha-766300-m03_ha-766300-m02.txt: (17.716967s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m03 "sudo cat /home/docker/cp-test.txt"
E0807 18:58:20.507899    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m03 "sudo cat /home/docker/cp-test.txt": (10.2183046s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m02 "sudo cat /home/docker/cp-test_ha-766300-m03_ha-766300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m02 "sudo cat /home/docker/cp-test_ha-766300-m03_ha-766300-m02.txt": (10.2645562s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m03:/home/docker/cp-test.txt ha-766300-m04:/home/docker/cp-test_ha-766300-m03_ha-766300-m04.txt
E0807 18:58:38.126690    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m03:/home/docker/cp-test.txt ha-766300-m04:/home/docker/cp-test_ha-766300-m03_ha-766300-m04.txt: (17.6442692s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m03 "sudo cat /home/docker/cp-test.txt": (10.1821046s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m04 "sudo cat /home/docker/cp-test_ha-766300-m03_ha-766300-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m04 "sudo cat /home/docker/cp-test_ha-766300-m03_ha-766300-m04.txt": (10.1542573s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 cp testdata\cp-test.txt ha-766300-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 cp testdata\cp-test.txt ha-766300-m04:/home/docker/cp-test.txt: (10.1611642s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m04 "sudo cat /home/docker/cp-test.txt": (10.1772386s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile408936721\001\cp-test_ha-766300-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile408936721\001\cp-test_ha-766300-m04.txt: (10.126591s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m04 "sudo cat /home/docker/cp-test.txt": (10.0230988s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m04:/home/docker/cp-test.txt ha-766300:/home/docker/cp-test_ha-766300-m04_ha-766300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m04:/home/docker/cp-test.txt ha-766300:/home/docker/cp-test_ha-766300-m04_ha-766300.txt: (17.7231305s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m04 "sudo cat /home/docker/cp-test.txt": (10.1758167s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300 "sudo cat /home/docker/cp-test_ha-766300-m04_ha-766300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300 "sudo cat /home/docker/cp-test_ha-766300-m04_ha-766300.txt": (10.3931526s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m04:/home/docker/cp-test.txt ha-766300-m02:/home/docker/cp-test_ha-766300-m04_ha-766300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m04:/home/docker/cp-test.txt ha-766300-m02:/home/docker/cp-test_ha-766300-m04_ha-766300-m02.txt: (17.9818652s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m04 "sudo cat /home/docker/cp-test.txt": (10.2431595s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m02 "sudo cat /home/docker/cp-test_ha-766300-m04_ha-766300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m02 "sudo cat /home/docker/cp-test_ha-766300-m04_ha-766300-m02.txt": (10.0307852s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m04:/home/docker/cp-test.txt ha-766300-m03:/home/docker/cp-test_ha-766300-m04_ha-766300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 cp ha-766300-m04:/home/docker/cp-test.txt ha-766300-m03:/home/docker/cp-test_ha-766300-m04_ha-766300-m03.txt: (17.4794131s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (5.1168679s)

** stderr ** 
	W0807 19:01:26.002313   10600 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:536: failed to run command by deadline. exceeded timeout : out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:539: failed to run an cp command. args "out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m03 "sudo cat /home/docker/cp-test_ha-766300-m04_ha-766300-m03.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m03 "sudo cat /home/docker/cp-test_ha-766300-m04_ha-766300-m03.txt": context deadline exceeded (0s)
helpers_test.go:536: failed to run command by deadline. exceeded timeout : out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m03 "sudo cat /home/docker/cp-test_ha-766300-m04_ha-766300-m03.txt"
helpers_test.go:539: failed to run an cp command. args "out/minikube-windows-amd64.exe -p ha-766300 ssh -n ha-766300-m03 \"sudo cat /home/docker/cp-test_ha-766300-m04_ha-766300-m03.txt\"" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-766300 -n ha-766300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-766300 -n ha-766300: (13.1651262s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 logs -n 25: (9.3609138s)
helpers_test.go:252: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| cp      | ha-766300 cp testdata\cp-test.txt                                                                                        | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:56 UTC | 07 Aug 24 18:56 UTC |
	|         | ha-766300-m03:/home/docker/cp-test.txt                                                                                   |           |                   |         |                     |                     |
	| ssh     | ha-766300 ssh -n                                                                                                         | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:56 UTC | 07 Aug 24 18:56 UTC |
	|         | ha-766300-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-766300 cp ha-766300-m03:/home/docker/cp-test.txt                                                                      | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:56 UTC | 07 Aug 24 18:57 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile408936721\001\cp-test_ha-766300-m03.txt |           |                   |         |                     |                     |
	| ssh     | ha-766300 ssh -n                                                                                                         | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:57 UTC | 07 Aug 24 18:57 UTC |
	|         | ha-766300-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-766300 cp ha-766300-m03:/home/docker/cp-test.txt                                                                      | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:57 UTC | 07 Aug 24 18:57 UTC |
	|         | ha-766300:/home/docker/cp-test_ha-766300-m03_ha-766300.txt                                                               |           |                   |         |                     |                     |
	| ssh     | ha-766300 ssh -n                                                                                                         | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:57 UTC | 07 Aug 24 18:57 UTC |
	|         | ha-766300-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-766300 ssh -n ha-766300 sudo cat                                                                                      | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:57 UTC | 07 Aug 24 18:57 UTC |
	|         | /home/docker/cp-test_ha-766300-m03_ha-766300.txt                                                                         |           |                   |         |                     |                     |
	| cp      | ha-766300 cp ha-766300-m03:/home/docker/cp-test.txt                                                                      | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:57 UTC | 07 Aug 24 18:58 UTC |
	|         | ha-766300-m02:/home/docker/cp-test_ha-766300-m03_ha-766300-m02.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-766300 ssh -n                                                                                                         | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:58 UTC | 07 Aug 24 18:58 UTC |
	|         | ha-766300-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-766300 ssh -n ha-766300-m02 sudo cat                                                                                  | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:58 UTC | 07 Aug 24 18:58 UTC |
	|         | /home/docker/cp-test_ha-766300-m03_ha-766300-m02.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-766300 cp ha-766300-m03:/home/docker/cp-test.txt                                                                      | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:58 UTC | 07 Aug 24 18:58 UTC |
	|         | ha-766300-m04:/home/docker/cp-test_ha-766300-m03_ha-766300-m04.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-766300 ssh -n                                                                                                         | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:58 UTC | 07 Aug 24 18:59 UTC |
	|         | ha-766300-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-766300 ssh -n ha-766300-m04 sudo cat                                                                                  | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:59 UTC | 07 Aug 24 18:59 UTC |
	|         | /home/docker/cp-test_ha-766300-m03_ha-766300-m04.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-766300 cp testdata\cp-test.txt                                                                                        | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:59 UTC | 07 Aug 24 18:59 UTC |
	|         | ha-766300-m04:/home/docker/cp-test.txt                                                                                   |           |                   |         |                     |                     |
	| ssh     | ha-766300 ssh -n                                                                                                         | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:59 UTC | 07 Aug 24 18:59 UTC |
	|         | ha-766300-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-766300 cp ha-766300-m04:/home/docker/cp-test.txt                                                                      | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:59 UTC | 07 Aug 24 18:59 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile408936721\001\cp-test_ha-766300-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-766300 ssh -n                                                                                                         | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:59 UTC | 07 Aug 24 18:59 UTC |
	|         | ha-766300-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-766300 cp ha-766300-m04:/home/docker/cp-test.txt                                                                      | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 18:59 UTC | 07 Aug 24 19:00 UTC |
	|         | ha-766300:/home/docker/cp-test_ha-766300-m04_ha-766300.txt                                                               |           |                   |         |                     |                     |
	| ssh     | ha-766300 ssh -n                                                                                                         | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | ha-766300-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-766300 ssh -n ha-766300 sudo cat                                                                                      | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_ha-766300-m04_ha-766300.txt                                                                         |           |                   |         |                     |                     |
	| cp      | ha-766300 cp ha-766300-m04:/home/docker/cp-test.txt                                                                      | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | ha-766300-m02:/home/docker/cp-test_ha-766300-m04_ha-766300-m02.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-766300 ssh -n                                                                                                         | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | ha-766300-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-766300 ssh -n ha-766300-m02 sudo cat                                                                                  | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:01 UTC |
	|         | /home/docker/cp-test_ha-766300-m04_ha-766300-m02.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-766300 cp ha-766300-m04:/home/docker/cp-test.txt                                                                      | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:01 UTC | 07 Aug 24 19:01 UTC |
	|         | ha-766300-m03:/home/docker/cp-test_ha-766300-m04_ha-766300-m03.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-766300 ssh -n                                                                                                         | ha-766300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:01 UTC |                     |
	|         | ha-766300-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 18:31:31
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 18:31:31.156543   12940 out.go:291] Setting OutFile to fd 540 ...
	I0807 18:31:31.157550   12940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:31:31.157550   12940 out.go:304] Setting ErrFile to fd 1388...
	I0807 18:31:31.157550   12940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:31:31.182223   12940 out.go:298] Setting JSON to false
	I0807 18:31:31.184906   12940 start.go:129] hostinfo: {"hostname":"minikube6","uptime":317420,"bootTime":1722738070,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4717 Build 19045.4717","kernelVersion":"10.0.19045.4717 Build 19045.4717","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0807 18:31:31.184906   12940 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 18:31:31.191000   12940 out.go:177] * [ha-766300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	I0807 18:31:31.198231   12940 notify.go:220] Checking for updates...
	I0807 18:31:31.198784   12940 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 18:31:31.202150   12940 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 18:31:31.205041   12940 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0807 18:31:31.208112   12940 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 18:31:31.210905   12940 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 18:31:31.214011   12940 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 18:31:36.661002   12940 out.go:177] * Using the hyperv driver based on user configuration
	I0807 18:31:36.665072   12940 start.go:297] selected driver: hyperv
	I0807 18:31:36.665072   12940 start.go:901] validating driver "hyperv" against <nil>
	I0807 18:31:36.665072   12940 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 18:31:36.710427   12940 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 18:31:36.710820   12940 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 18:31:36.710820   12940 cni.go:84] Creating CNI manager for ""
	I0807 18:31:36.710820   12940 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0807 18:31:36.710820   12940 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0807 18:31:36.710820   12940 start.go:340] cluster config:
{Name:ha-766300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-766300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:31:36.711972   12940 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 18:31:36.716381   12940 out.go:177] * Starting "ha-766300" primary control-plane node in "ha-766300" cluster
	I0807 18:31:36.720895   12940 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 18:31:36.721112   12940 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0807 18:31:36.721112   12940 cache.go:56] Caching tarball of preloaded images
	I0807 18:31:36.721605   12940 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0807 18:31:36.722009   12940 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 18:31:36.722701   12940 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\config.json ...
	I0807 18:31:36.722701   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\config.json: {Name:mkd1789158757b6c59e145754941402c1d283541 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:31:36.723984   12940 start.go:360] acquireMachinesLock for ha-766300: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 18:31:36.723984   12940 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-766300"
I0807 18:31:36.723984   12940 start.go:93] Provisioning new machine with config: &{Name:ha-766300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-766300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 18:31:36.724594   12940 start.go:125] createHost starting for "" (driver="hyperv")
	I0807 18:31:36.728198   12940 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 18:31:36.729184   12940 start.go:159] libmachine.API.Create for "ha-766300" (driver="hyperv")
	I0807 18:31:36.729184   12940 client.go:168] LocalClient.Create starting
	I0807 18:31:36.729184   12940 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0807 18:31:36.729184   12940 main.go:141] libmachine: Decoding PEM data...
	I0807 18:31:36.729184   12940 main.go:141] libmachine: Parsing certificate...
	I0807 18:31:36.730520   12940 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0807 18:31:36.731040   12940 main.go:141] libmachine: Decoding PEM data...
	I0807 18:31:36.731108   12940 main.go:141] libmachine: Parsing certificate...
	I0807 18:31:36.731317   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0807 18:31:38.858067   12940 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0807 18:31:38.858067   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:31:38.858067   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0807 18:31:40.572678   12940 main.go:141] libmachine: [stdout =====>] : False
	
	I0807 18:31:40.572678   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:31:40.572918   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0807 18:31:42.099167   12940 main.go:141] libmachine: [stdout =====>] : True
	
	I0807 18:31:42.099427   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:31:42.099662   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0807 18:31:45.799162   12940 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0807 18:31:45.799162   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:31:45.802693   12940 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0807 18:31:46.294259   12940 main.go:141] libmachine: Creating SSH key...
	I0807 18:31:46.481137   12940 main.go:141] libmachine: Creating VM...
	I0807 18:31:46.482146   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0807 18:31:49.395155   12940 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0807 18:31:49.395663   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:31:49.395663   12940 main.go:141] libmachine: Using switch "Default Switch"
	I0807 18:31:49.395663   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0807 18:31:51.164524   12940 main.go:141] libmachine: [stdout =====>] : True
	
	I0807 18:31:51.164524   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:31:51.164524   12940 main.go:141] libmachine: Creating VHD
	I0807 18:31:51.164524   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0807 18:31:54.998556   12940 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5FD98D4C-71F4-4FD4-915C-399CE8F6DEBE
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0807 18:31:54.998556   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:31:54.998556   12940 main.go:141] libmachine: Writing magic tar header
	I0807 18:31:54.998556   12940 main.go:141] libmachine: Writing SSH key tar header
	I0807 18:31:55.008940   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0807 18:31:58.352710   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:31:58.353419   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:31:58.353419   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\disk.vhd' -SizeBytes 20000MB
	I0807 18:32:00.960255   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:32:00.960472   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:00.960582   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-766300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0807 18:32:04.721247   12940 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-766300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0807 18:32:04.721247   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:04.721715   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-766300 -DynamicMemoryEnabled $false
	I0807 18:32:07.012924   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:32:07.013670   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:07.013767   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-766300 -Count 2
	I0807 18:32:09.261295   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:32:09.261814   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:09.261927   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-766300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\boot2docker.iso'
	I0807 18:32:11.944960   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:32:11.945505   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:11.945505   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-766300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\disk.vhd'
	I0807 18:32:14.663818   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:32:14.663818   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:14.663818   12940 main.go:141] libmachine: Starting VM...
	I0807 18:32:14.664422   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-766300
	I0807 18:32:17.860244   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:32:17.860489   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:17.860489   12940 main.go:141] libmachine: Waiting for host to start...
	I0807 18:32:17.860777   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:32:20.179792   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:32:20.179792   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:20.180603   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:32:22.752388   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:32:22.752388   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:23.763322   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:32:26.093304   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:32:26.093304   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:26.094210   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:32:28.787475   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:32:28.787475   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:29.802212   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:32:32.121773   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:32:32.121773   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:32.122184   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:32:34.721267   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:32:34.721628   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:35.731816   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:32:38.005919   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:32:38.005919   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:38.005919   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:32:40.621098   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:32:40.621098   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:41.624640   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:32:44.047429   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:32:44.047429   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:44.047429   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:32:46.804544   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:32:46.804652   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:46.804732   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:32:49.077933   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:32:49.078179   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:49.078179   12940 machine.go:94] provisionDockerMachine start ...
	I0807 18:32:49.078443   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:32:51.355441   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:32:51.355441   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:51.356261   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:32:54.084730   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:32:54.084730   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:54.094124   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:32:54.105756   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.88 22 <nil> <nil>}
	I0807 18:32:54.106719   12940 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 18:32:54.239935   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0807 18:32:54.239935   12940 buildroot.go:166] provisioning hostname "ha-766300"
	I0807 18:32:54.239935   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:32:56.489881   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:32:56.489881   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:56.489999   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:32:59.201256   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:32:59.201754   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:32:59.207968   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:32:59.208679   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.88 22 <nil> <nil>}
	I0807 18:32:59.208679   12940 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-766300 && echo "ha-766300" | sudo tee /etc/hostname
	I0807 18:32:59.379815   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-766300
	
	I0807 18:32:59.379923   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:01.638892   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:01.638892   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:01.638892   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:33:04.381705   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:33:04.381705   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:04.388668   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:33:04.389450   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.88 22 <nil> <nil>}
	I0807 18:33:04.389450   12940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-766300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-766300/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-766300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 18:33:04.545875   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 18:33:04.545875   12940 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0807 18:33:04.545875   12940 buildroot.go:174] setting up certificates
	I0807 18:33:04.545875   12940 provision.go:84] configureAuth start
	I0807 18:33:04.545875   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:06.733541   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:06.733541   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:06.733740   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:33:09.340737   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:33:09.340737   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:09.341206   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:11.545499   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:11.545499   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:11.545499   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:33:14.246402   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:33:14.246402   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:14.247421   12940 provision.go:143] copyHostCerts
	I0807 18:33:14.247536   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0807 18:33:14.247536   12940 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0807 18:33:14.247536   12940 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0807 18:33:14.248457   12940 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0807 18:33:14.251015   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0807 18:33:14.251472   12940 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0807 18:33:14.251595   12940 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0807 18:33:14.252176   12940 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0807 18:33:14.253666   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0807 18:33:14.254097   12940 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0807 18:33:14.254213   12940 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0807 18:33:14.254564   12940 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0807 18:33:14.256334   12940 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-766300 san=[127.0.0.1 172.28.224.88 ha-766300 localhost minikube]
	I0807 18:33:14.405536   12940 provision.go:177] copyRemoteCerts
	I0807 18:33:14.417023   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 18:33:14.417023   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:16.773166   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:16.773421   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:16.773504   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:33:19.553634   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:33:19.553634   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:19.555300   12940 sshutil.go:53] new ssh client: &{IP:172.28.224.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\id_rsa Username:docker}
	I0807 18:33:19.661847   12940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2447573s)
	I0807 18:33:19.661847   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0807 18:33:19.661847   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 18:33:19.721678   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0807 18:33:19.722468   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0807 18:33:19.772052   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0807 18:33:19.772850   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0807 18:33:19.826308   12940 provision.go:87] duration metric: took 15.2802376s to configureAuth
	I0807 18:33:19.826371   12940 buildroot.go:189] setting minikube options for container-runtime
	I0807 18:33:19.826590   12940 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:33:19.826590   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:22.069848   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:22.069848   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:22.069848   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:33:24.688397   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:33:24.689022   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:24.694501   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:33:24.695222   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.88 22 <nil> <nil>}
	I0807 18:33:24.695222   12940 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0807 18:33:24.821880   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0807 18:33:24.821984   12940 buildroot.go:70] root file system type: tmpfs
	I0807 18:33:24.822078   12940 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0807 18:33:24.822266   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:27.051363   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:27.051632   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:27.051730   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:33:29.675102   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:33:29.675102   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:29.681273   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:33:29.681998   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.88 22 <nil> <nil>}
	I0807 18:33:29.681998   12940 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0807 18:33:29.864108   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0807 18:33:29.864108   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:32.038619   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:32.038619   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:32.038619   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:33:34.658790   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:33:34.658790   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:34.664637   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:33:34.665390   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.88 22 <nil> <nil>}
	I0807 18:33:34.665390   12940 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0807 18:33:36.914340   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
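The `sudo diff -u old new || { sudo mv …; systemctl …; }` one-liner above is an install-if-changed idiom: the staged `docker.service.new` replaces the live unit (and triggers daemon-reload/enable/restart) only when the contents differ — including, as in this log, when the live file does not exist yet, since `diff` then also exits nonzero. A minimal sketch with temp files and the systemctl calls stubbed out:

```shell
#!/bin/sh
# Install-if-changed sketch: replace "live" with "staged" only on a diff.
# In the log the same idiom guards /lib/systemd/system/docker.service and
# then chains daemon-reload/enable/restart, which are stubbed here.
live=$(mktemp)
staged=$(mktemp)
printf 'ExecStart=/usr/bin/dockerd\n' > "$live"
printf 'ExecStart=/usr/bin/dockerd --tlsverify\n' > "$staged"

reloaded=no
diff -u "$live" "$staged" > /dev/null 2>&1 || {
  mv "$staged" "$live"
  reloaded=yes            # real code: systemctl daemon-reload/enable/restart
}
echo "reloaded=$reloaded"
installed=$(cat "$live")
rm -f "$live"
```

When the two files are identical, the `||` branch never fires, so an unchanged unit causes no service restart.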
	
	I0807 18:33:36.914531   12940 machine.go:97] duration metric: took 47.8357405s to provisionDockerMachine
	I0807 18:33:36.914588   12940 client.go:171] duration metric: took 2m0.1838657s to LocalClient.Create
	I0807 18:33:36.914588   12940 start.go:167] duration metric: took 2m0.1838657s to libmachine.API.Create "ha-766300"
	I0807 18:33:36.914661   12940 start.go:293] postStartSetup for "ha-766300" (driver="hyperv")
	I0807 18:33:36.914661   12940 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 18:33:36.929053   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 18:33:36.929053   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:39.161162   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:39.161162   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:39.161162   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:33:41.781457   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:33:41.782249   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:41.782767   12940 sshutil.go:53] new ssh client: &{IP:172.28.224.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\id_rsa Username:docker}
	I0807 18:33:41.888710   12940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9595927s)
	I0807 18:33:41.899659   12940 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 18:33:41.906576   12940 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 18:33:41.906674   12940 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0807 18:33:41.906748   12940 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0807 18:33:41.908183   12940 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> 96602.pem in /etc/ssl/certs
	I0807 18:33:41.908250   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> /etc/ssl/certs/96602.pem
	I0807 18:33:41.920518   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 18:33:41.941322   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /etc/ssl/certs/96602.pem (1708 bytes)
	I0807 18:33:42.001613   12940 start.go:296] duration metric: took 5.0868869s for postStartSetup
	I0807 18:33:42.005172   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:44.208431   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:44.208843   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:44.208931   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:33:46.810933   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:33:46.811849   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:46.812072   12940 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\config.json ...
	I0807 18:33:46.815213   12940 start.go:128] duration metric: took 2m10.0889536s to createHost
	I0807 18:33:46.815213   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:49.003942   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:49.004858   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:49.004955   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:33:51.585001   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:33:51.585281   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:51.590192   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:33:51.591350   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.88 22 <nil> <nil>}
	I0807 18:33:51.591350   12940 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 18:33:51.716241   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723055631.720639473
	
	I0807 18:33:51.716241   12940 fix.go:216] guest clock: 1723055631.720639473
	I0807 18:33:51.716323   12940 fix.go:229] Guest: 2024-08-07 18:33:51.720639473 +0000 UTC Remote: 2024-08-07 18:33:46.8152135 +0000 UTC m=+135.816939601 (delta=4.905425973s)
	I0807 18:33:51.716323   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:53.900081   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:53.900081   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:53.901028   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:33:56.494290   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:33:56.495304   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:56.500826   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:33:56.501571   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.88 22 <nil> <nil>}
	I0807 18:33:56.501571   12940 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1723055631
	I0807 18:33:56.635040   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Aug  7 18:33:51 UTC 2024
	
	I0807 18:33:56.635040   12940 fix.go:236] clock set: Wed Aug  7 18:33:51 UTC 2024
	 (err=<nil>)
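The clock fix above reads the guest's `date +%s.%N` over SSH, computes the drift against the host's wall clock, and corrects it with `sudo date -s @<epoch>`. A sketch of the delta computation using the guest timestamp from the log (the host reading below is illustrative, and the `date -s` is only echoed since it needs root on the guest):

```shell
#!/bin/sh
# Clock-drift sketch. The guest reported 1723055631.720639473 in the log;
# the host reference value here is an illustrative stand-in (~5s behind).
guest_raw='1723055631.720639473'
guest_epoch=${guest_raw%.*}          # strip fractional seconds
host_epoch=1723055626                # illustrative host reading
delta=$((guest_epoch - host_epoch))
echo "delta=${delta}s"
echo "would run: sudo date -s @$guest_epoch"
```

The echoed command matches the log's `sudo date -s @1723055631`: the fractional part of the guest reading is discarded before the epoch is fed back to `date -s`.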
	I0807 18:33:56.635040   12940 start.go:83] releasing machines lock for "ha-766300", held for 2m19.9092657s
	I0807 18:33:56.635825   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:33:58.832313   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:33:58.832313   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:33:58.832579   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:34:01.475567   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:34:01.475567   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:34:01.479254   12940 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0807 18:34:01.479254   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:34:01.490388   12940 ssh_runner.go:195] Run: cat /version.json
	I0807 18:34:01.490388   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:34:03.793685   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:34:03.793685   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:34:03.793685   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:34:03.807724   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:34:03.807724   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:34:03.807724   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:34:06.538310   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:34:06.538310   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:34:06.538788   12940 sshutil.go:53] new ssh client: &{IP:172.28.224.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\id_rsa Username:docker}
	I0807 18:34:06.560228   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:34:06.560228   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:34:06.561073   12940 sshutil.go:53] new ssh client: &{IP:172.28.224.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\id_rsa Username:docker}
	I0807 18:34:06.628873   12940 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1495531s)
	W0807 18:34:06.628873   12940 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0807 18:34:06.661675   12940 ssh_runner.go:235] Completed: cat /version.json: (5.1709464s)
	I0807 18:34:06.673034   12940 ssh_runner.go:195] Run: systemctl --version
	I0807 18:34:06.697358   12940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 18:34:06.707310   12940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 18:34:06.720291   12940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0807 18:34:06.753257   12940 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0807 18:34:06.753257   12940 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0807 18:34:06.753657   12940 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0807 18:34:06.753775   12940 start.go:495] detecting cgroup driver to use...
	I0807 18:34:06.754076   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 18:34:06.804595   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0807 18:34:06.838249   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0807 18:34:06.856752   12940 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0807 18:34:06.869936   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0807 18:34:06.902253   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 18:34:06.933553   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0807 18:34:06.964327   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 18:34:06.995523   12940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 18:34:07.027350   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0807 18:34:07.058217   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0807 18:34:07.091529   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0807 18:34:07.120536   12940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 18:34:07.151359   12940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 18:34:07.184582   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:34:07.400558   12940 ssh_runner.go:195] Run: sudo systemctl restart containerd
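The containerd steps above are a series of in-place `sed` edits to `/etc/containerd/config.toml`: forcing the cgroupfs driver (`SystemdCgroup = false`), pinning the pause sandbox image, and migrating runtime names to `io.containerd.runc.v2`. Two of the same patterns applied to a sample config (GNU sed assumed for `-i`/`-r`):

```shell
#!/bin/sh
# Apply two of the sed edits from the log to a sample config.toml:
# force SystemdCgroup = false and pin the pause sandbox image.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
patched=$(cat "$cfg")
rm -f "$cfg"
echo "$patched"
```

The `( *)` capture group preserves each line's original indentation, which is why the edits are safe against TOML files with varying nesting depth.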
	I0807 18:34:07.435779   12940 start.go:495] detecting cgroup driver to use...
	I0807 18:34:07.447958   12940 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0807 18:34:07.487663   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 18:34:07.520376   12940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 18:34:07.575738   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 18:34:07.607297   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 18:34:07.646244   12940 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0807 18:34:07.707574   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 18:34:07.732666   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 18:34:07.778281   12940 ssh_runner.go:195] Run: which cri-dockerd
	I0807 18:34:07.796409   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0807 18:34:07.812506   12940 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0807 18:34:07.854201   12940 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0807 18:34:08.068485   12940 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0807 18:34:08.259510   12940 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0807 18:34:08.259510   12940 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0807 18:34:08.307269   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:34:08.506023   12940 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 18:34:11.118054   12940 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6118509s)
	I0807 18:34:11.130256   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0807 18:34:11.169883   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 18:34:11.204088   12940 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0807 18:34:11.412805   12940 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0807 18:34:11.606832   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:34:11.810522   12940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0807 18:34:11.852807   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 18:34:11.891013   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:34:12.106919   12940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0807 18:34:12.219487   12940 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0807 18:34:12.231869   12940 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0807 18:34:12.242462   12940 start.go:563] Will wait 60s for crictl version
	I0807 18:34:12.254687   12940 ssh_runner.go:195] Run: which crictl
	I0807 18:34:12.271674   12940 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 18:34:12.330287   12940 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0807 18:34:12.341736   12940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 18:34:12.386561   12940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 18:34:12.425940   12940 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0807 18:34:12.426251   12940 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0807 18:34:12.430247   12940 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0807 18:34:12.430247   12940 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0807 18:34:12.430247   12940 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0807 18:34:12.430247   12940 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:f6:3a:6a Flags:up|broadcast|multicast|running}
	I0807 18:34:12.432961   12940 ip.go:210] interface addr: fe80::e7eb:b592:d388:ff99/64
	I0807 18:34:12.432961   12940 ip.go:210] interface addr: 172.28.224.1/20
	I0807 18:34:12.447303   12940 ssh_runner.go:195] Run: grep 172.28.224.1	host.minikube.internal$ /etc/hosts
	I0807 18:34:12.453964   12940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 18:34:12.488845   12940 kubeadm.go:883] updating cluster {Name:ha-766300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:ha-766300 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.224.88 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0807 18:34:12.488845   12940 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 18:34:12.499657   12940 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0807 18:34:12.527730   12940 docker.go:685] Got preloaded images: 
	I0807 18:34:12.527730   12940 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0807 18:34:12.544382   12940 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0807 18:34:12.577901   12940 ssh_runner.go:195] Run: which lz4
	I0807 18:34:12.585033   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0807 18:34:12.596712   12940 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0807 18:34:12.603414   12940 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0807 18:34:12.603551   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0807 18:34:14.980666   12940 docker.go:649] duration metric: took 2.3953474s to copy over tarball
	I0807 18:34:14.993942   12940 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0807 18:34:23.805676   12940 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.8116212s)
	I0807 18:34:23.805676   12940 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0807 18:34:23.868693   12940 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0807 18:34:23.886885   12940 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0807 18:34:23.930974   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:34:24.144244   12940 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 18:34:27.490789   12940 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3465018s)
	I0807 18:34:27.500614   12940 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0807 18:34:27.530138   12940 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0807 18:34:27.530204   12940 cache_images.go:84] Images are preloaded, skipping loading
	I0807 18:34:27.530265   12940 kubeadm.go:934] updating node { 172.28.224.88 8443 v1.30.3 docker true true} ...
	I0807 18:34:27.530528   12940 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-766300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.224.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-766300 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 18:34:27.539667   12940 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0807 18:34:27.608811   12940 cni.go:84] Creating CNI manager for ""
	I0807 18:34:27.608811   12940 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0807 18:34:27.608811   12940 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 18:34:27.608811   12940 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.224.88 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-766300 NodeName:ha-766300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.224.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.224.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 18:34:27.609831   12940 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.224.88
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-766300"
	  kubeletExtraArgs:
	    node-ip: 172.28.224.88
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.224.88"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0807 18:34:27.609831   12940 kube-vip.go:115] generating kube-vip config ...
	I0807 18:34:27.621800   12940 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0807 18:34:27.647675   12940 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0807 18:34:27.647929   12940 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.239.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0807 18:34:27.659477   12940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 18:34:27.675601   12940 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 18:34:27.686462   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0807 18:34:27.704463   12940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0807 18:34:27.736773   12940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 18:34:27.768830   12940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0807 18:34:27.800636   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0807 18:34:27.847978   12940 ssh_runner.go:195] Run: grep 172.28.239.254	control-plane.minikube.internal$ /etc/hosts
	I0807 18:34:27.854096   12940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.239.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 18:34:27.888705   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:34:28.099801   12940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 18:34:28.134589   12940 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300 for IP: 172.28.224.88
	I0807 18:34:28.134589   12940 certs.go:194] generating shared ca certs ...
	I0807 18:34:28.134656   12940 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:34:28.135397   12940 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0807 18:34:28.135844   12940 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0807 18:34:28.136036   12940 certs.go:256] generating profile certs ...
	I0807 18:34:28.136622   12940 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\client.key
	I0807 18:34:28.136622   12940 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\client.crt with IP's: []
	I0807 18:34:28.349075   12940 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\client.crt ...
	I0807 18:34:28.349075   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\client.crt: {Name:mk8e2227ff939c73df9ce8c26a17f9ee0bfeb14e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:34:28.351039   12940 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\client.key ...
	I0807 18:34:28.351039   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\client.key: {Name:mk9d63ee8d9eb9ecb007518cfee4f98e367f66bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:34:28.351366   12940 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.3bcbab52
	I0807 18:34:28.352407   12940 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.3bcbab52 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.224.88 172.28.239.254]
	I0807 18:34:28.630425   12940 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.3bcbab52 ...
	I0807 18:34:28.630425   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.3bcbab52: {Name:mke152e16ed39bf569fcdb17970a67302a92a3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:34:28.632090   12940 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.3bcbab52 ...
	I0807 18:34:28.632090   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.3bcbab52: {Name:mk7a3da08b84fc181e61ee4963380280cd45725a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:34:28.632090   12940 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.3bcbab52 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt
	I0807 18:34:28.649153   12940 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.3bcbab52 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key
	I0807 18:34:28.650590   12940 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.key
	I0807 18:34:28.650706   12940 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.crt with IP's: []
	I0807 18:34:29.173255   12940 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.crt ...
	I0807 18:34:29.173255   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.crt: {Name:mke4c5cb8c20ed69c24c3bf8303d9fc9b1d9851c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:34:29.174265   12940 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.key ...
	I0807 18:34:29.174265   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.key: {Name:mkd05b87576d91ba8935f3f6110ddcf438efe15e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:34:29.175838   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0807 18:34:29.176393   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0807 18:34:29.176610   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0807 18:34:29.176771   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0807 18:34:29.176771   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0807 18:34:29.176771   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0807 18:34:29.176771   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0807 18:34:29.186991   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0807 18:34:29.188068   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem (1338 bytes)
	W0807 18:34:29.188667   12940 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660_empty.pem, impossibly tiny 0 bytes
	I0807 18:34:29.188667   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0807 18:34:29.189094   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0807 18:34:29.189548   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0807 18:34:29.189732   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0807 18:34:29.190200   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem (1708 bytes)
	I0807 18:34:29.190200   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:34:29.190767   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem -> /usr/share/ca-certificates/9660.pem
	I0807 18:34:29.191110   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> /usr/share/ca-certificates/96602.pem
	I0807 18:34:29.192106   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 18:34:29.246778   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 18:34:29.284463   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 18:34:29.322154   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0807 18:34:29.375620   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0807 18:34:29.421227   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0807 18:34:29.465800   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 18:34:29.515067   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0807 18:34:29.566214   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 18:34:29.611887   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem --> /usr/share/ca-certificates/9660.pem (1338 bytes)
	I0807 18:34:29.657571   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /usr/share/ca-certificates/96602.pem (1708 bytes)
	I0807 18:34:29.703259   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 18:34:29.749997   12940 ssh_runner.go:195] Run: openssl version
	I0807 18:34:29.774335   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9660.pem && ln -fs /usr/share/ca-certificates/9660.pem /etc/ssl/certs/9660.pem"
	I0807 18:34:29.804716   12940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9660.pem
	I0807 18:34:29.811798   12940 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 17:50 /usr/share/ca-certificates/9660.pem
	I0807 18:34:29.823751   12940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9660.pem
	I0807 18:34:29.843063   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9660.pem /etc/ssl/certs/51391683.0"
	I0807 18:34:29.877461   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96602.pem && ln -fs /usr/share/ca-certificates/96602.pem /etc/ssl/certs/96602.pem"
	I0807 18:34:29.907546   12940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96602.pem
	I0807 18:34:29.915285   12940 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 17:50 /usr/share/ca-certificates/96602.pem
	I0807 18:34:29.927579   12940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96602.pem
	I0807 18:34:29.949144   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96602.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 18:34:29.980308   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 18:34:30.010191   12940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:34:30.016644   12940 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:33 /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:34:30.029944   12940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:34:30.051012   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 18:34:30.083524   12940 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 18:34:30.090620   12940 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0807 18:34:30.090998   12940 kubeadm.go:392] StartCluster: {Name:ha-766300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clu
sterName:ha-766300 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.224.88 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:34:30.100012   12940 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0807 18:34:30.138140   12940 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0807 18:34:30.168933   12940 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0807 18:34:30.196415   12940 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0807 18:34:30.213493   12940 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0807 18:34:30.213672   12940 kubeadm.go:157] found existing configuration files:
	
	I0807 18:34:30.225925   12940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0807 18:34:30.243980   12940 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0807 18:34:30.255593   12940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0807 18:34:30.283231   12940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0807 18:34:30.299827   12940 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0807 18:34:30.311414   12940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0807 18:34:30.340369   12940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0807 18:34:30.356612   12940 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0807 18:34:30.368550   12940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0807 18:34:30.397906   12940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0807 18:34:30.415774   12940 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0807 18:34:30.428365   12940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0807 18:34:30.445176   12940 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0807 18:34:30.921603   12940 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0807 18:34:44.308650   12940 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0807 18:34:44.308650   12940 kubeadm.go:310] [preflight] Running pre-flight checks
	I0807 18:34:44.308650   12940 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0807 18:34:44.308650   12940 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0807 18:34:44.309932   12940 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0807 18:34:44.310083   12940 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0807 18:34:44.314465   12940 out.go:204]   - Generating certificates and keys ...
	I0807 18:34:44.314991   12940 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0807 18:34:44.315217   12940 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0807 18:34:44.315405   12940 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0807 18:34:44.315540   12940 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0807 18:34:44.315540   12940 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0807 18:34:44.315540   12940 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0807 18:34:44.315540   12940 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0807 18:34:44.315540   12940 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-766300 localhost] and IPs [172.28.224.88 127.0.0.1 ::1]
	I0807 18:34:44.316083   12940 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0807 18:34:44.316491   12940 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-766300 localhost] and IPs [172.28.224.88 127.0.0.1 ::1]
	I0807 18:34:44.316583   12940 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0807 18:34:44.316720   12940 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0807 18:34:44.316720   12940 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0807 18:34:44.317283   12940 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0807 18:34:44.317283   12940 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0807 18:34:44.317615   12940 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0807 18:34:44.317814   12940 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0807 18:34:44.317814   12940 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0807 18:34:44.317814   12940 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0807 18:34:44.317814   12940 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0807 18:34:44.318360   12940 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0807 18:34:44.322136   12940 out.go:204]   - Booting up control plane ...
	I0807 18:34:44.322369   12940 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0807 18:34:44.322641   12940 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0807 18:34:44.322837   12940 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0807 18:34:44.322927   12940 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0807 18:34:44.322927   12940 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0807 18:34:44.322927   12940 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0807 18:34:44.322927   12940 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0807 18:34:44.323937   12940 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0807 18:34:44.323937   12940 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.060662ms
	I0807 18:34:44.323937   12940 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0807 18:34:44.323937   12940 kubeadm.go:310] [api-check] The API server is healthy after 8.002038868s
	I0807 18:34:44.323937   12940 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0807 18:34:44.324926   12940 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0807 18:34:44.324926   12940 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0807 18:34:44.324926   12940 kubeadm.go:310] [mark-control-plane] Marking the node ha-766300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0807 18:34:44.325587   12940 kubeadm.go:310] [bootstrap-token] Using token: flhfyh.589jacjrbykepsdi
	I0807 18:34:44.330287   12940 out.go:204]   - Configuring RBAC rules ...
	I0807 18:34:44.330467   12940 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0807 18:34:44.330637   12940 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0807 18:34:44.330987   12940 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0807 18:34:44.331316   12940 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0807 18:34:44.331662   12940 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0807 18:34:44.331845   12940 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0807 18:34:44.332124   12940 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0807 18:34:44.332178   12940 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0807 18:34:44.332349   12940 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0807 18:34:44.332349   12940 kubeadm.go:310] 
	I0807 18:34:44.332514   12940 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0807 18:34:44.332566   12940 kubeadm.go:310] 
	I0807 18:34:44.332783   12940 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0807 18:34:44.332839   12940 kubeadm.go:310] 
	I0807 18:34:44.332952   12940 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0807 18:34:44.333115   12940 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0807 18:34:44.333221   12940 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0807 18:34:44.333278   12940 kubeadm.go:310] 
	I0807 18:34:44.333475   12940 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0807 18:34:44.333530   12940 kubeadm.go:310] 
	I0807 18:34:44.333687   12940 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0807 18:34:44.333687   12940 kubeadm.go:310] 
	I0807 18:34:44.333805   12940 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0807 18:34:44.334082   12940 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0807 18:34:44.334312   12940 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0807 18:34:44.334370   12940 kubeadm.go:310] 
	I0807 18:34:44.334479   12940 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0807 18:34:44.334479   12940 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0807 18:34:44.334479   12940 kubeadm.go:310] 
	I0807 18:34:44.334479   12940 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token flhfyh.589jacjrbykepsdi \
	I0807 18:34:44.334479   12940 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:54111b4769238fc338d0511ba57c177f732a6d68c0b9cd0aa8cacbbf3c79643b \
	I0807 18:34:44.334479   12940 kubeadm.go:310] 	--control-plane 
	I0807 18:34:44.334479   12940 kubeadm.go:310] 
	I0807 18:34:44.334479   12940 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0807 18:34:44.334479   12940 kubeadm.go:310] 
	I0807 18:34:44.334479   12940 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token flhfyh.589jacjrbykepsdi \
	I0807 18:34:44.335783   12940 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:54111b4769238fc338d0511ba57c177f732a6d68c0b9cd0aa8cacbbf3c79643b 
	I0807 18:34:44.335837   12940 cni.go:84] Creating CNI manager for ""
	I0807 18:34:44.335837   12940 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0807 18:34:44.338995   12940 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0807 18:34:44.356191   12940 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0807 18:34:44.364164   12940 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0807 18:34:44.364224   12940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0807 18:34:44.412094   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0807 18:34:45.102861   12940 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0807 18:34:45.117835   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:45.122600   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-766300 minikube.k8s.io/updated_at=2024_08_07T18_34_45_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e minikube.k8s.io/name=ha-766300 minikube.k8s.io/primary=true
	I0807 18:34:45.140335   12940 ops.go:34] apiserver oom_adj: -16
	I0807 18:34:45.343424   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:45.850343   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:46.346500   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:46.857865   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:47.356483   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:47.856485   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:48.361122   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:48.844631   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:49.358496   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:49.856859   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:50.346475   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:50.846848   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:51.350767   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:51.854577   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:52.351212   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:52.853267   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:53.355509   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:53.856961   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:54.346257   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:54.850142   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:55.356066   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:55.844393   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:56.347816   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:56.850566   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:57.349023   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:34:57.516203   12940 kubeadm.go:1113] duration metric: took 12.413183s to wait for elevateKubeSystemPrivileges
	I0807 18:34:57.516316   12940 kubeadm.go:394] duration metric: took 27.4249666s to StartCluster
	I0807 18:34:57.516316   12940 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:34:57.516316   12940 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 18:34:57.517390   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:34:57.519005   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0807 18:34:57.519188   12940 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.28.224.88 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 18:34:57.519188   12940 start.go:241] waiting for startup goroutines ...
	I0807 18:34:57.519188   12940 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0807 18:34:57.519375   12940 addons.go:69] Setting storage-provisioner=true in profile "ha-766300"
	I0807 18:34:57.519375   12940 addons.go:69] Setting default-storageclass=true in profile "ha-766300"
	I0807 18:34:57.519523   12940 addons.go:234] Setting addon storage-provisioner=true in "ha-766300"
	I0807 18:34:57.519523   12940 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-766300"
	I0807 18:34:57.519668   12940 host.go:66] Checking if "ha-766300" exists ...
	I0807 18:34:57.519742   12940 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:34:57.520644   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:34:57.521153   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:34:57.731069   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0807 18:34:58.429631   12940 start.go:971] {"host.minikube.internal": 172.28.224.1} host record injected into CoreDNS's ConfigMap
	I0807 18:34:59.923947   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:34:59.923947   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:34:59.924747   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:34:59.924747   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:34:59.925677   12940 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 18:34:59.926571   12940 kapi.go:59] client config for ha-766300: &rest.Config{Host:"https://172.28.239.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-766300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-766300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1da64c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0807 18:34:59.927913   12940 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 18:34:59.928172   12940 cert_rotation.go:137] Starting client certificate rotation controller
	I0807 18:34:59.928547   12940 addons.go:234] Setting addon default-storageclass=true in "ha-766300"
	I0807 18:34:59.928547   12940 host.go:66] Checking if "ha-766300" exists ...
	I0807 18:34:59.929884   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:34:59.930712   12940 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 18:34:59.930712   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0807 18:34:59.930784   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:35:02.353664   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:35:02.353664   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:02.353785   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:35:02.414526   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:35:02.414726   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:02.414726   12940 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0807 18:35:02.414895   12940 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0807 18:35:02.414981   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:35:04.881784   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:35:04.881784   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:04.882019   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:35:05.237113   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:35:05.237113   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:05.237113   12940 sshutil.go:53] new ssh client: &{IP:172.28.224.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\id_rsa Username:docker}
	I0807 18:35:05.392764   12940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 18:35:07.610036   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:35:07.610036   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:07.611362   12940 sshutil.go:53] new ssh client: &{IP:172.28.224.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\id_rsa Username:docker}
	I0807 18:35:07.744490   12940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0807 18:35:07.905924   12940 round_trippers.go:463] GET https://172.28.239.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0807 18:35:07.905959   12940 round_trippers.go:469] Request Headers:
	I0807 18:35:07.905959   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:35:07.906011   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:35:07.921022   12940 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0807 18:35:07.922156   12940 round_trippers.go:463] PUT https://172.28.239.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0807 18:35:07.922156   12940 round_trippers.go:469] Request Headers:
	I0807 18:35:07.922156   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:35:07.922156   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:35:07.922156   12940 round_trippers.go:473]     Content-Type: application/json
	I0807 18:35:07.924751   12940 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:35:07.929005   12940 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0807 18:35:07.932966   12940 addons.go:510] duration metric: took 10.4136452s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0807 18:35:07.933137   12940 start.go:246] waiting for cluster config update ...
	I0807 18:35:07.933137   12940 start.go:255] writing updated cluster config ...
	I0807 18:35:07.936483   12940 out.go:177] 
	I0807 18:35:07.948348   12940 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:35:07.948348   12940 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\config.json ...
	I0807 18:35:07.954332   12940 out.go:177] * Starting "ha-766300-m02" control-plane node in "ha-766300" cluster
	I0807 18:35:07.957520   12940 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 18:35:07.957520   12940 cache.go:56] Caching tarball of preloaded images
	I0807 18:35:07.957520   12940 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0807 18:35:07.958338   12940 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 18:35:07.958338   12940 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\config.json ...
	I0807 18:35:07.962344   12940 start.go:360] acquireMachinesLock for ha-766300-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 18:35:07.962344   12940 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-766300-m02"
	I0807 18:35:07.962629   12940 start.go:93] Provisioning new machine with config: &{Name:ha-766300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-766300 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.224.88 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 18:35:07.962629   12940 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0807 18:35:07.964539   12940 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 18:35:07.965667   12940 start.go:159] libmachine.API.Create for "ha-766300" (driver="hyperv")
	I0807 18:35:07.965824   12940 client.go:168] LocalClient.Create starting
	I0807 18:35:07.966373   12940 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0807 18:35:07.966672   12940 main.go:141] libmachine: Decoding PEM data...
	I0807 18:35:07.966750   12940 main.go:141] libmachine: Parsing certificate...
	I0807 18:35:07.966938   12940 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0807 18:35:07.967301   12940 main.go:141] libmachine: Decoding PEM data...
	I0807 18:35:07.967301   12940 main.go:141] libmachine: Parsing certificate...
	I0807 18:35:07.967466   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0807 18:35:09.914327   12940 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0807 18:35:09.914969   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:09.915050   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0807 18:35:11.674199   12940 main.go:141] libmachine: [stdout =====>] : False
	
	I0807 18:35:11.674418   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:11.674418   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0807 18:35:13.195170   12940 main.go:141] libmachine: [stdout =====>] : True
	
	I0807 18:35:13.195170   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:13.195170   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0807 18:35:16.993620   12940 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0807 18:35:16.993620   12940 main.go:141] libmachine: [stderr =====>] : 
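The driver selects a Hyper-V switch by running `Get-VMSwitch ... | ConvertTo-Json` and parsing the JSON shown above (it ends up using "Default Switch" because no External switch exists on this host). A minimal Go sketch of that selection logic, assuming a hypothetical `vmSwitch` type and `pickSwitch` helper rather than minikube's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// vmSwitch mirrors the three fields selected by the PowerShell query above.
// The type and function names here are illustrative, not minikube's.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

// pickSwitch prefers an External switch (SwitchType 2 in Hyper-V's
// VMSwitchType enum), falling back to the first entry, which on a stock
// Hyper-V install is the Internal "Default Switch" (SwitchType 1).
func pickSwitch(raw []byte) (vmSwitch, error) {
	var switches []vmSwitch
	if err := json.Unmarshal(raw, &switches); err != nil {
		return vmSwitch{}, err
	}
	if len(switches) == 0 {
		return vmSwitch{}, fmt.Errorf("no usable Hyper-V switch found")
	}
	for _, s := range switches {
		if s.SwitchType == 2 {
			return s, nil
		}
	}
	return switches[0], nil
}

func main() {
	// The exact stdout captured in the log above.
	out := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
	sw, err := pickSwitch(out)
	if err != nil {
		panic(err)
	}
	fmt.Println(sw.Name)
}
```

Go's `encoding/json` matches the capitalized PowerShell property names case-insensitively against the exported struct fields, so no field tags are needed.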
	I0807 18:35:16.996186   12940 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0807 18:35:17.460032   12940 main.go:141] libmachine: Creating SSH key...
	I0807 18:35:18.169685   12940 main.go:141] libmachine: Creating VM...
	I0807 18:35:18.169685   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0807 18:35:21.142745   12940 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0807 18:35:21.143721   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:21.143721   12940 main.go:141] libmachine: Using switch "Default Switch"
	I0807 18:35:21.143942   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0807 18:35:22.924019   12940 main.go:141] libmachine: [stdout =====>] : True
	
	I0807 18:35:22.924019   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:22.924019   12940 main.go:141] libmachine: Creating VHD
	I0807 18:35:22.924019   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0807 18:35:26.845077   12940 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 75EE8676-3085-4590-9428-31ED3F0D41FD
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0807 18:35:26.845077   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:26.845346   12940 main.go:141] libmachine: Writing magic tar header
	I0807 18:35:26.845346   12940 main.go:141] libmachine: Writing SSH key tar header
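The "magic tar header" / "SSH key tar header" steps above embed a small tar payload in the freshly created fixed VHD so the guest can pick up the SSH key on first boot. A sketch of building such an in-memory tar stream with Go's `archive/tar` (the entry path and helper name are illustrative assumptions, not the driver's actual layout):

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
)

// writeKeyTar builds an in-memory tar stream holding an SSH public key,
// the kind of payload the driver writes into the raw VHD above.
func writeKeyTar(key []byte) ([]byte, error) {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	hdr := &tar.Header{
		Name: ".ssh/authorized_keys", // illustrative path
		Mode: 0644,
		Size: int64(len(key)),
	}
	if err := tw.WriteHeader(hdr); err != nil {
		return nil, err
	}
	if _, err := tw.Write(key); err != nil {
		return nil, err
	}
	// Close flushes padding and the two zero trailer blocks.
	if err := tw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	data, err := writeKeyTar([]byte("ssh-rsa AAAA... jenkins@minikube6\n"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("tar stream: %d bytes\n", len(data))
}
```

Tar streams are 512-byte-block aligned, which is why a tiny 10MB fixed VHD is enough to carry the payload before the `Convert-VHD`/`Resize-VHD` steps that follow.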
	I0807 18:35:26.856021   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0807 18:35:30.118066   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:35:30.118066   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:30.118201   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02\disk.vhd' -SizeBytes 20000MB
	I0807 18:35:32.744624   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:35:32.744624   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:32.744624   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-766300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0807 18:35:36.478358   12940 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-766300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0807 18:35:36.478358   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:36.478358   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-766300-m02 -DynamicMemoryEnabled $false
	I0807 18:35:38.780860   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:35:38.781192   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:38.781277   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-766300-m02 -Count 2
	I0807 18:35:41.045158   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:35:41.045158   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:41.045158   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-766300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02\boot2docker.iso'
	I0807 18:35:43.715991   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:35:43.715991   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:43.715991   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-766300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02\disk.vhd'
	I0807 18:35:46.471023   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:35:46.471335   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:46.471335   12940 main.go:141] libmachine: Starting VM...
	I0807 18:35:46.471335   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-766300-m02
	I0807 18:35:49.621276   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:35:49.621276   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:49.621276   12940 main.go:141] libmachine: Waiting for host to start...
	I0807 18:35:49.622264   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:35:52.068195   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:35:52.068195   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:52.068195   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:35:54.706800   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:35:54.706800   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:55.722607   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:35:58.046823   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:35:58.046823   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:35:58.047734   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:36:00.675050   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:36:00.675166   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:01.687903   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:03.986191   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:03.986191   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:03.986960   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:36:06.649809   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:36:06.650812   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:07.655976   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:09.954647   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:09.954647   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:09.954764   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:36:12.599802   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:36:12.599837   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:13.613994   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:15.920350   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:15.920350   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:15.920940   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:36:18.594353   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:36:18.594870   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:18.595201   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:20.779069   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:20.779069   12940 main.go:141] libmachine: [stderr =====>] : 
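The "Waiting for host to start..." phase above is a poll loop: the driver re-runs the `ipaddresses[0]` query (with roughly one-second pauses) until the adapter reports an address, which here takes about 30 seconds before `172.28.238.183` appears. A minimal sketch of that pattern, with a stand-in `query` function in place of the PowerShell call:

```go
package main

import (
	"fmt"
	"time"
)

// waitForIP polls query until it returns a non-empty address or the attempt
// budget runs out, mirroring the repeated ipaddresses[0] calls in the log.
func waitForIP(query func() string, attempts int, delay time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip := query(); ip != "" {
			return ip, nil
		}
		time.Sleep(delay)
	}
	return "", fmt.Errorf("machine did not report an IP after %d attempts", attempts)
}

func main() {
	// Simulate the log above: two empty responses, then the real address.
	responses := []string{"", "", "172.28.238.183"}
	i := 0
	query := func() string {
		r := responses[i%len(responses)]
		i++
		return r
	}
	ip, err := waitForIP(query, 10, time.Millisecond)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip)
}
```

A fixed delay is fine here because Hyper-V guests acquire DHCP leases quickly once booted; a backoff would only matter for much slower convergence.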
	I0807 18:36:20.779069   12940 machine.go:94] provisionDockerMachine start ...
	I0807 18:36:20.779278   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:23.058274   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:23.058274   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:23.058274   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:36:25.701346   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:36:25.702625   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:25.708330   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:36:25.723897   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.238.183 22 <nil> <nil>}
	I0807 18:36:25.723897   12940 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 18:36:25.857976   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0807 18:36:25.857976   12940 buildroot.go:166] provisioning hostname "ha-766300-m02"
	I0807 18:36:25.857976   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:28.087157   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:28.087157   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:28.087796   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:36:30.738124   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:36:30.738313   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:30.743881   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:36:30.744576   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.238.183 22 <nil> <nil>}
	I0807 18:36:30.744576   12940 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-766300-m02 && echo "ha-766300-m02" | sudo tee /etc/hostname
	I0807 18:36:30.907709   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-766300-m02
	
	I0807 18:36:30.907709   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:33.110411   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:33.110626   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:33.110736   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:36:35.765625   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:36:35.765625   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:35.771496   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:36:35.771743   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.238.183 22 <nil> <nil>}
	I0807 18:36:35.771743   12940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-766300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-766300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-766300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 18:36:35.929376   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 18:36:35.929376   12940 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0807 18:36:35.929376   12940 buildroot.go:174] setting up certificates
	I0807 18:36:35.929376   12940 provision.go:84] configureAuth start
	I0807 18:36:35.929911   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:38.100883   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:38.100883   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:38.100883   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:36:40.733289   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:36:40.733289   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:40.733289   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:42.965503   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:42.965503   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:42.965503   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:36:45.631472   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:36:45.631727   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:45.631727   12940 provision.go:143] copyHostCerts
	I0807 18:36:45.631908   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0807 18:36:45.631908   12940 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0807 18:36:45.631908   12940 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0807 18:36:45.632745   12940 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0807 18:36:45.634128   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0807 18:36:45.634858   12940 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0807 18:36:45.635063   12940 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0807 18:36:45.636219   12940 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0807 18:36:45.638229   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0807 18:36:45.638229   12940 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0807 18:36:45.638229   12940 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0807 18:36:45.638848   12940 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0807 18:36:45.639533   12940 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-766300-m02 san=[127.0.0.1 172.28.238.183 ha-766300-m02 localhost minikube]
	I0807 18:36:45.783303   12940 provision.go:177] copyRemoteCerts
	I0807 18:36:45.795863   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 18:36:45.795863   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:48.045089   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:48.045089   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:48.045444   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:36:50.736902   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:36:50.737115   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:50.737515   12940 sshutil.go:53] new ssh client: &{IP:172.28.238.183 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02\id_rsa Username:docker}
	I0807 18:36:50.843622   12940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0476939s)
	I0807 18:36:50.843622   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0807 18:36:50.843622   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 18:36:50.890611   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0807 18:36:50.890611   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0807 18:36:50.936610   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0807 18:36:50.937084   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0807 18:36:50.983572   12940 provision.go:87] duration metric: took 15.0540031s to configureAuth
	I0807 18:36:50.983572   12940 buildroot.go:189] setting minikube options for container-runtime
	I0807 18:36:50.984602   12940 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:36:50.984668   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:53.184719   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:53.184719   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:53.185055   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:36:55.896096   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:36:55.896096   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:55.904290   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:36:55.904921   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.238.183 22 <nil> <nil>}
	I0807 18:36:55.904921   12940 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0807 18:36:56.046427   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0807 18:36:56.046493   12940 buildroot.go:70] root file system type: tmpfs
	I0807 18:36:56.046710   12940 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0807 18:36:56.046785   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:36:58.320104   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:36:58.320104   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:36:58.320915   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:37:01.020160   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:37:01.020358   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:01.026610   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:37:01.026857   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.238.183 22 <nil> <nil>}
	I0807 18:37:01.026857   12940 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.224.88"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0807 18:37:01.197751   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.224.88
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0807 18:37:01.197751   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:37:03.432200   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:37:03.432200   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:03.432837   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:37:06.109391   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:37:06.109391   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:06.116193   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:37:06.116941   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.238.183 22 <nil> <nil>}
	I0807 18:37:06.116995   12940 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0807 18:37:08.375183   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0807 18:37:08.375183   12940 machine.go:97] duration metric: took 47.5955049s to provisionDockerMachine
	I0807 18:37:08.375263   12940 client.go:171] duration metric: took 2m0.4078981s to LocalClient.Create
	I0807 18:37:08.375263   12940 start.go:167] duration metric: took 2m0.4080549s to libmachine.API.Create "ha-766300"
	I0807 18:37:08.375373   12940 start.go:293] postStartSetup for "ha-766300-m02" (driver="hyperv")
	I0807 18:37:08.375410   12940 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 18:37:08.388818   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 18:37:08.388818   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:37:10.592319   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:37:10.592319   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:10.592564   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:37:13.281666   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:37:13.281666   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:13.281666   12940 sshutil.go:53] new ssh client: &{IP:172.28.238.183 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02\id_rsa Username:docker}
	I0807 18:37:13.391933   12940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0030511s)
	I0807 18:37:13.405868   12940 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 18:37:13.412858   12940 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 18:37:13.412858   12940 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0807 18:37:13.413027   12940 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0807 18:37:13.414724   12940 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> 96602.pem in /etc/ssl/certs
	I0807 18:37:13.414724   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> /etc/ssl/certs/96602.pem
	I0807 18:37:13.428993   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 18:37:13.448043   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /etc/ssl/certs/96602.pem (1708 bytes)
	I0807 18:37:13.490279   12940 start.go:296] duration metric: took 5.1148404s for postStartSetup
	I0807 18:37:13.494571   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:37:15.694816   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:37:15.695087   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:15.695153   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:37:18.322088   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:37:18.322342   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:18.322342   12940 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\config.json ...
	I0807 18:37:18.325052   12940 start.go:128] duration metric: took 2m10.3607544s to createHost
	I0807 18:37:18.325052   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:37:20.540626   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:37:20.540626   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:20.540626   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:37:23.160702   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:37:23.160845   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:23.166064   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:37:23.166782   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.238.183 22 <nil> <nil>}
	I0807 18:37:23.166877   12940 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0807 18:37:23.300148   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723055843.320326868
	
	I0807 18:37:23.300213   12940 fix.go:216] guest clock: 1723055843.320326868
	I0807 18:37:23.300314   12940 fix.go:229] Guest: 2024-08-07 18:37:23.320326868 +0000 UTC Remote: 2024-08-07 18:37:18.3250521 +0000 UTC m=+347.324070901 (delta=4.995274768s)
	I0807 18:37:23.300446   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:37:25.496425   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:37:25.496425   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:25.496673   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:37:28.158274   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:37:28.158274   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:28.164942   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:37:28.165608   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.238.183 22 <nil> <nil>}
	I0807 18:37:28.165694   12940 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1723055843
	I0807 18:37:28.317585   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Aug  7 18:37:23 UTC 2024
	
	I0807 18:37:28.317585   12940 fix.go:236] clock set: Wed Aug  7 18:37:23 UTC 2024
	 (err=<nil>)
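The clock-fix sequence above (read the guest clock over SSH, compare it with the host, then resync with `sudo date -s @<epoch>` when the drift is too large) can be sketched as a standalone snippet. The 2-second threshold and the variable names are illustrative assumptions for the example, not minikube's actual constants, and the guest read is stubbed out rather than done over SSH:

```shell
#!/bin/sh
# Sketch of the guest-clock sync step seen in the log. In minikube the
# guest epoch is read over SSH with `date +%s.%N`; here it is stubbed
# to the host value so the snippet is self-contained.
host_epoch=$(date +%s)      # host clock, seconds since epoch
guest_epoch=$host_epoch     # stand-in for the SSH read of the guest clock
delta=$((host_epoch - guest_epoch))
[ "$delta" -lt 0 ] && delta=$((-delta))
# Threshold is an assumption for illustration, not minikube's setting.
if [ "$delta" -gt 2 ]; then
    echo "would run: sudo date -s @$host_epoch"
else
    echo "clock in sync (delta=${delta}s)"
fi
```

The log's `fix.go` lines follow the same shape: compute the delta (4.995s here), then set the guest clock to the host epoch.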
	I0807 18:37:28.317585   12940 start.go:83] releasing machines lock for "ha-766300-m02", held for 2m20.3532422s
	I0807 18:37:28.318151   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:37:30.547186   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:37:30.547186   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:30.547484   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:37:33.205575   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:37:33.206611   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:33.209666   12940 out.go:177] * Found network options:
	I0807 18:37:33.212516   12940 out.go:177]   - NO_PROXY=172.28.224.88
	W0807 18:37:33.214784   12940 proxy.go:119] fail to check proxy env: Error ip not in block
	I0807 18:37:33.216848   12940 out.go:177]   - NO_PROXY=172.28.224.88
	W0807 18:37:33.218923   12940 proxy.go:119] fail to check proxy env: Error ip not in block
	W0807 18:37:33.221159   12940 proxy.go:119] fail to check proxy env: Error ip not in block
	I0807 18:37:33.222553   12940 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0807 18:37:33.223491   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:37:33.232804   12940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0807 18:37:33.232804   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m02 ).state
	I0807 18:37:35.483076   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:37:35.483235   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:35.483345   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:37:35.491004   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:37:35.491954   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:35.491954   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 18:37:38.237351   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:37:38.238353   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:38.238723   12940 sshutil.go:53] new ssh client: &{IP:172.28.238.183 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02\id_rsa Username:docker}
	I0807 18:37:38.261652   12940 main.go:141] libmachine: [stdout =====>] : 172.28.238.183
	
	I0807 18:37:38.261652   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:38.262051   12940 sshutil.go:53] new ssh client: &{IP:172.28.238.183 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m02\id_rsa Username:docker}
	I0807 18:37:38.333371   12940 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1005017s)
	W0807 18:37:38.333491   12940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 18:37:38.345580   12940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 18:37:38.350756   12940 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1281371s)
	W0807 18:37:38.350756   12940 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0807 18:37:38.378767   12940 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0807 18:37:38.378796   12940 start.go:495] detecting cgroup driver to use...
	I0807 18:37:38.378796   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 18:37:38.430948   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0807 18:37:38.468151   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0807 18:37:38.476259   12940 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0807 18:37:38.476259   12940 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0807 18:37:38.492232   12940 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0807 18:37:38.506604   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0807 18:37:38.539977   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 18:37:38.571903   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0807 18:37:38.602927   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 18:37:38.637284   12940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 18:37:38.670638   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0807 18:37:38.705179   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0807 18:37:38.739356   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0807 18:37:38.774705   12940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 18:37:38.806873   12940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 18:37:38.840678   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:37:39.046613   12940 ssh_runner.go:195] Run: sudo systemctl restart containerd
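The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place to force the `cgroupfs` driver before containerd is restarted. A minimal reproduction of the key substitution, run against a throwaway copy of the file with sample TOML content assumed rather than the real minikube config:

```shell
#!/bin/sh
# Sketch of the cgroup-driver rewrite from the log, applied to a
# throwaway copy of config.toml instead of the real file.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same substitution the log runs over SSH: force SystemdCgroup = false,
# keeping the original indentation via the capture group (GNU sed -r).
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$tmp"
result=$(grep SystemdCgroup "$tmp")
echo "$result"
rm -f "$tmp"
```

The other `sed` lines in the log follow the same pattern: anchored regexes with a leading-whitespace capture group, so the edits survive whatever indentation the shipped config uses.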
	I0807 18:37:39.080927   12940 start.go:495] detecting cgroup driver to use...
	I0807 18:37:39.093528   12940 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0807 18:37:39.138387   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 18:37:39.177007   12940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 18:37:39.222659   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 18:37:39.263855   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 18:37:39.301378   12940 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0807 18:37:39.356376   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 18:37:39.379997   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 18:37:39.427757   12940 ssh_runner.go:195] Run: which cri-dockerd
	I0807 18:37:39.445554   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0807 18:37:39.463279   12940 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0807 18:37:39.506445   12940 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0807 18:37:39.710452   12940 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0807 18:37:39.921210   12940 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0807 18:37:39.921338   12940 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0807 18:37:39.969683   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:37:40.175180   12940 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 18:37:42.775498   12940 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6002846s)
	I0807 18:37:42.787266   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0807 18:37:42.824462   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 18:37:42.858786   12940 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0807 18:37:43.055542   12940 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0807 18:37:43.263136   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:37:43.463366   12940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0807 18:37:43.503986   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 18:37:43.537989   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:37:43.731198   12940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0807 18:37:43.843990   12940 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0807 18:37:43.855144   12940 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0807 18:37:43.864429   12940 start.go:563] Will wait 60s for crictl version
	I0807 18:37:43.875289   12940 ssh_runner.go:195] Run: which crictl
	I0807 18:37:43.894303   12940 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 18:37:43.947656   12940 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0807 18:37:43.955344   12940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 18:37:44.001272   12940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 18:37:44.041209   12940 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0807 18:37:44.044251   12940 out.go:177]   - env NO_PROXY=172.28.224.88
	I0807 18:37:44.047177   12940 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0807 18:37:44.051473   12940 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0807 18:37:44.051473   12940 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0807 18:37:44.051473   12940 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0807 18:37:44.051473   12940 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:f6:3a:6a Flags:up|broadcast|multicast|running}
	I0807 18:37:44.054209   12940 ip.go:210] interface addr: fe80::e7eb:b592:d388:ff99/64
	I0807 18:37:44.054209   12940 ip.go:210] interface addr: 172.28.224.1/20
	I0807 18:37:44.064219   12940 ssh_runner.go:195] Run: grep 172.28.224.1	host.minikube.internal$ /etc/hosts
	I0807 18:37:44.070307   12940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 18:37:44.091956   12940 mustload.go:65] Loading cluster: ha-766300
	I0807 18:37:44.092908   12940 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:37:44.093863   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:37:46.272572   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:37:46.272572   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:46.272958   12940 host.go:66] Checking if "ha-766300" exists ...
	I0807 18:37:46.273872   12940 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300 for IP: 172.28.238.183
	I0807 18:37:46.273872   12940 certs.go:194] generating shared ca certs ...
	I0807 18:37:46.273872   12940 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:37:46.274452   12940 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0807 18:37:46.274796   12940 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0807 18:37:46.274796   12940 certs.go:256] generating profile certs ...
	I0807 18:37:46.275524   12940 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\client.key
	I0807 18:37:46.275524   12940 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.489d5c54
	I0807 18:37:46.275524   12940 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.489d5c54 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.224.88 172.28.238.183 172.28.239.254]
	I0807 18:37:46.512734   12940 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.489d5c54 ...
	I0807 18:37:46.512734   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.489d5c54: {Name:mk4a736e66d978df518f4811a6b19be15d696196 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:37:46.514568   12940 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.489d5c54 ...
	I0807 18:37:46.514568   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.489d5c54: {Name:mk835bd6912ea9cf8ea8bcda18b1c4d6981c24bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:37:46.515232   12940 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.489d5c54 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt
	I0807 18:37:46.530105   12940 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.489d5c54 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key
	I0807 18:37:46.531305   12940 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.key
	I0807 18:37:46.531863   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0807 18:37:46.531863   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0807 18:37:46.532158   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0807 18:37:46.532158   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0807 18:37:46.532476   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0807 18:37:46.532632   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0807 18:37:46.533507   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0807 18:37:46.533736   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0807 18:37:46.533736   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem (1338 bytes)
	W0807 18:37:46.534314   12940 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660_empty.pem, impossibly tiny 0 bytes
	I0807 18:37:46.534603   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0807 18:37:46.534925   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0807 18:37:46.535269   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0807 18:37:46.535409   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0807 18:37:46.535950   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem (1708 bytes)
	I0807 18:37:46.536226   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> /usr/share/ca-certificates/96602.pem
	I0807 18:37:46.536226   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:37:46.536226   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem -> /usr/share/ca-certificates/9660.pem
	I0807 18:37:46.536226   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:37:48.740698   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:37:48.740698   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:48.741539   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:37:51.392837   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:37:51.393064   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:37:51.393470   12940 sshutil.go:53] new ssh client: &{IP:172.28.224.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\id_rsa Username:docker}
	I0807 18:37:51.491083   12940 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0807 18:37:51.499745   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0807 18:37:51.533372   12940 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0807 18:37:51.540914   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0807 18:37:51.573994   12940 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0807 18:37:51.585414   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0807 18:37:51.620322   12940 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0807 18:37:51.627244   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0807 18:37:51.659349   12940 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0807 18:37:51.665862   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0807 18:37:51.696890   12940 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0807 18:37:51.703623   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0807 18:37:51.724486   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 18:37:51.775946   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 18:37:51.820104   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 18:37:51.866057   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0807 18:37:51.912900   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0807 18:37:51.968238   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0807 18:37:52.016755   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 18:37:52.059757   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0807 18:37:52.106704   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /usr/share/ca-certificates/96602.pem (1708 bytes)
	I0807 18:37:52.156192   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 18:37:52.204932   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem --> /usr/share/ca-certificates/9660.pem (1338 bytes)
	I0807 18:37:52.252496   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0807 18:37:52.288772   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0807 18:37:52.318688   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0807 18:37:52.354162   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0807 18:37:52.385177   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0807 18:37:52.415596   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0807 18:37:52.446524   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0807 18:37:52.489354   12940 ssh_runner.go:195] Run: openssl version
	I0807 18:37:52.507700   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96602.pem && ln -fs /usr/share/ca-certificates/96602.pem /etc/ssl/certs/96602.pem"
	I0807 18:37:52.541699   12940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96602.pem
	I0807 18:37:52.552710   12940 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 17:50 /usr/share/ca-certificates/96602.pem
	I0807 18:37:52.565240   12940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96602.pem
	I0807 18:37:52.585753   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96602.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 18:37:52.616903   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 18:37:52.651089   12940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:37:52.658120   12940 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:33 /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:37:52.671306   12940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:37:52.690559   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 18:37:52.721170   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9660.pem && ln -fs /usr/share/ca-certificates/9660.pem /etc/ssl/certs/9660.pem"
	I0807 18:37:52.757190   12940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9660.pem
	I0807 18:37:52.764942   12940 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 17:50 /usr/share/ca-certificates/9660.pem
	I0807 18:37:52.778014   12940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9660.pem
	I0807 18:37:52.797785   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9660.pem /etc/ssl/certs/51391683.0"
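	The `openssl x509 -hash` / `ln -fs ... <hash>.0` pairs above implement OpenSSL's hashed-directory lookup convention: tools that scan `/etc/ssl/certs` find a CA by a symlink named after its subject-name hash. A minimal sketch of the same mechanism against a throwaway cert (the `demoCA` name and temp paths are illustrative, not from this run):

```shell
#!/usr/bin/env bash
# Sketch: OpenSSL locates CA certs in a directory by subject-name hash,
# so a symlink named <hash>.0 must point at the PEM file.
set -eu
tmp=$(mktemp -d)
cd "$tmp"
# Throwaway self-signed cert (demoCA is an illustrative name).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
  -subj "/CN=demoCA" -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in ca.pem)  # 8 hex digits
ln -fs ca.pem "${hash}.0"
# Lookup via the hashed name resolves to the same cert.
openssl x509 -noout -subject -in "${hash}.0"
```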
	I0807 18:37:52.827163   12940 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 18:37:52.833238   12940 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0807 18:37:52.833238   12940 kubeadm.go:934] updating node {m02 172.28.238.183 8443 v1.30.3 docker true true} ...
	I0807 18:37:52.833777   12940 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-766300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.238.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-766300 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 18:37:52.833777   12940 kube-vip.go:115] generating kube-vip config ...
	I0807 18:37:52.845315   12940 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0807 18:37:52.869424   12940 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0807 18:37:52.869424   12940 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.239.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0807 18:37:52.880941   12940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 18:37:52.899765   12940 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0807 18:37:52.911814   12940 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0807 18:37:52.932980   12940 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm
	I0807 18:37:52.932980   12940 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet
	I0807 18:37:52.932980   12940 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl
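	The `?checksum=file:...sha256` query strings in the download URLs above tell minikube's downloader to verify each binary against its published digest before use. A stand-alone sketch of that verification step with `sha256sum` (the file name and payload here are made up; the expected value is the real SHA-256 of the payload):

```shell
#!/usr/bin/env bash
# Sketch: verify a downloaded artifact against a published SHA-256 digest,
# as the ?checksum=file:...sha256 URLs imply. Payload is illustrative.
set -eu
tmp=$(mktemp -d)
printf 'hello' > "$tmp/kubeadm.demo"
# What the companion .sha256 file would contain for this payload.
expected=2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
actual=$(sha256sum "$tmp/kubeadm.demo" | cut -d' ' -f1)
if [ "$actual" = "$expected" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH" >&2
  exit 1
fi
```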
	I0807 18:37:53.977760   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0807 18:37:53.989575   12940 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0807 18:37:53.991049   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0807 18:37:53.997350   12940 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0807 18:37:53.997350   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0807 18:37:54.011120   12940 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0807 18:37:54.067984   12940 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0807 18:37:54.067984   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0807 18:37:58.863746   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:37:58.892336   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0807 18:37:58.904621   12940 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0807 18:37:58.911138   12940 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0807 18:37:58.911138   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0807 18:37:59.582059   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0807 18:37:59.600435   12940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0807 18:37:59.634849   12940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 18:37:59.669321   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0807 18:37:59.718455   12940 ssh_runner.go:195] Run: grep 172.28.239.254	control-plane.minikube.internal$ /etc/hosts
	I0807 18:37:59.724801   12940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.239.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
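	The one-liner above first strips any existing `control-plane.minikube.internal` entry and then appends the current VIP, which keeps the edit idempotent across restarts. The same pattern against a scratch file (the paths and the stale first address are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the idempotent hosts-file rewrite: drop any old entry for the
# name, then append the fresh one, so repeated runs never duplicate lines.
set -eu
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.1\tcontrol-plane.minikube.internal\n' > "$hosts"
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '172.28.239.254\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep -c 'control-plane.minikube.internal' "$hosts"  # exactly one entry remains
```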
	I0807 18:37:59.759709   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:37:59.961328   12940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 18:37:59.992768   12940 host.go:66] Checking if "ha-766300" exists ...
	I0807 18:37:59.993394   12940 start.go:317] joinCluster: &{Name:ha-766300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-766300 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.224.88 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.238.183 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:37:59.993394   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0807 18:37:59.993394   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:38:02.202478   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:38:02.202478   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:38:02.202478   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:38:04.843622   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:38:04.843622   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:38:04.843970   12940 sshutil.go:53] new ssh client: &{IP:172.28.224.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\id_rsa Username:docker}
	I0807 18:38:05.262440   12940 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.2689779s)
	I0807 18:38:05.262440   12940 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.28.238.183 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 18:38:05.262440   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token l4io45.pb2zkt4q5s62d1mj --discovery-token-ca-cert-hash sha256:54111b4769238fc338d0511ba57c177f732a6d68c0b9cd0aa8cacbbf3c79643b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-766300-m02 --control-plane --apiserver-advertise-address=172.28.238.183 --apiserver-bind-port=8443"
	I0807 18:38:50.109574   12940 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token l4io45.pb2zkt4q5s62d1mj --discovery-token-ca-cert-hash sha256:54111b4769238fc338d0511ba57c177f732a6d68c0b9cd0aa8cacbbf3c79643b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-766300-m02 --control-plane --apiserver-advertise-address=172.28.238.183 --apiserver-bind-port=8443": (44.8465607s)
	I0807 18:38:50.109574   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0807 18:38:50.921581   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-766300-m02 minikube.k8s.io/updated_at=2024_08_07T18_38_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e minikube.k8s.io/name=ha-766300 minikube.k8s.io/primary=false
	I0807 18:38:51.106053   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-766300-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0807 18:38:51.262899   12940 start.go:319] duration metric: took 51.2688489s to joinCluster
	I0807 18:38:51.262989   12940 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.28.238.183 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 18:38:51.263754   12940 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:38:51.265982   12940 out.go:177] * Verifying Kubernetes components...
	I0807 18:38:51.282793   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:38:51.685191   12940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 18:38:51.728046   12940 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 18:38:51.729177   12940 kapi.go:59] client config for ha-766300: &rest.Config{Host:"https://172.28.239.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-766300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-766300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1da64c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0807 18:38:51.729177   12940 kubeadm.go:483] Overriding stale ClientConfig host https://172.28.239.254:8443 with https://172.28.224.88:8443
	I0807 18:38:51.730381   12940 node_ready.go:35] waiting up to 6m0s for node "ha-766300-m02" to be "Ready" ...
	I0807 18:38:51.730711   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:51.730745   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:51.730745   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:51.730799   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:51.749534   12940 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0807 18:38:52.246251   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:52.246251   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:52.246251   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:52.246251   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:52.253461   12940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:38:52.739537   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:52.739537   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:52.739537   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:52.739537   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:52.746118   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:38:53.245058   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:53.245058   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:53.245058   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:53.245058   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:53.250333   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:38:53.735157   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:53.735157   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:53.735157   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:53.735157   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:53.739790   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:38:53.742073   12940 node_ready.go:53] node "ha-766300-m02" has status "Ready":"False"
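	The repeated GETs above, roughly 500 ms apart, are minikube's readiness wait: keep fetching the Node object until its Ready condition flips, within the 6m budget. A generic form of that retry loop (the probe command, interval, and attempt count are placeholders, not minikube's actual parameters):

```shell
#!/usr/bin/env bash
# Sketch: retry a probe command at a fixed interval until it succeeds or
# attempts run out, like node_ready.go's wait on the Ready condition.
set -u
wait_ready() {
  attempts=$1; interval=$2; shift 2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@"; then return 0; fi
    i=$((i + 1))
    sleep "$interval"
  done
  return 1
}
# Placeholder probe: a flag file that appears after a moment.
flag=$(mktemp -u)
( sleep 1; touch "$flag" ) &
wait_ready 10 0.5 test -f "$flag" && echo "node Ready"
```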
	I0807 18:38:54.243095   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:54.243095   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:54.243179   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:54.243179   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:54.247723   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:38:54.736141   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:54.736141   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:54.736141   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:54.736141   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:54.742732   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:38:55.242716   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:55.242716   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:55.242716   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:55.242716   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:55.247041   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:38:55.746317   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:55.746317   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:55.746317   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:55.746317   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:55.752566   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:38:55.753470   12940 node_ready.go:53] node "ha-766300-m02" has status "Ready":"False"
	I0807 18:38:56.237912   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:56.238143   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:56.238176   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:56.238176   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:56.379430   12940 round_trippers.go:574] Response Status: 200 OK in 141 milliseconds
	I0807 18:38:56.745325   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:56.745325   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:56.745325   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:56.745325   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:56.749026   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:38:57.234342   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:57.234593   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:57.234593   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:57.234593   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:57.242914   12940 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:38:57.739757   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:57.739842   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:57.739842   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:57.739842   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:58.087924   12940 round_trippers.go:574] Response Status: 200 OK in 348 milliseconds
	I0807 18:38:58.088936   12940 node_ready.go:53] node "ha-766300-m02" has status "Ready":"False"
	I0807 18:38:58.242640   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:58.242640   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:58.242640   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:58.242640   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:58.271534   12940 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0807 18:38:58.732766   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:58.732766   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:58.732766   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:58.732766   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:58.738355   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:38:59.236476   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:59.236557   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:59.236557   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:59.236557   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:59.241327   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:38:59.743080   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:38:59.743289   12940 round_trippers.go:469] Request Headers:
	I0807 18:38:59.743289   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:38:59.743289   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:38:59.749049   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:00.232254   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:00.232254   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:00.232254   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:00.232254   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:00.240194   12940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:39:00.241266   12940 node_ready.go:53] node "ha-766300-m02" has status "Ready":"False"
	I0807 18:39:00.733817   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:00.733917   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:00.733917   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:00.733917   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:00.740940   12940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:39:01.233487   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:01.233487   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:01.233658   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:01.233658   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:01.241793   12940 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:39:01.746809   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:01.746809   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:01.746809   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:01.746809   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:01.751404   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:02.233065   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:02.233065   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:02.233189   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:02.233189   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:02.247897   12940 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0807 18:39:02.248899   12940 node_ready.go:53] node "ha-766300-m02" has status "Ready":"False"
	I0807 18:39:02.737635   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:02.737635   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:02.737635   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:02.737635   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:02.743233   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:03.231549   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:03.231645   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:03.231645   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:03.231645   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:03.236609   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:03.741192   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:03.741192   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:03.741192   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:03.741192   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:03.746253   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:04.244598   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:04.244683   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:04.244683   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:04.244683   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:04.249322   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:04.249916   12940 node_ready.go:53] node "ha-766300-m02" has status "Ready":"False"
	I0807 18:39:04.742767   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:04.742767   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:04.743122   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:04.743122   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:04.750066   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:39:05.246560   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:05.246560   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:05.246560   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:05.246674   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:05.251801   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:05.735472   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:05.735472   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:05.735472   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:05.735472   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:05.749698   12940 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0807 18:39:06.238904   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:06.239208   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:06.239208   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:06.239330   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:06.246952   12940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:39:06.740843   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:06.740843   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:06.740843   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:06.740843   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:06.747242   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:39:06.748189   12940 node_ready.go:53] node "ha-766300-m02" has status "Ready":"False"
	I0807 18:39:07.244581   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:07.244581   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:07.244684   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:07.244684   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:07.249655   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:07.741975   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:07.741975   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:07.741975   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:07.741975   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:07.746745   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:08.244945   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:08.244945   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:08.244945   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:08.244945   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:08.250140   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:08.731367   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:08.731367   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:08.731610   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:08.731610   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:08.740747   12940 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0807 18:39:09.245973   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:09.245973   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:09.245973   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:09.245973   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:09.252610   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:39:09.253412   12940 node_ready.go:53] node "ha-766300-m02" has status "Ready":"False"
	I0807 18:39:09.745943   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:09.745943   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:09.745943   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:09.745943   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:09.750846   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:10.231151   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:10.231255   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:10.231255   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:10.231255   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:10.235536   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:10.743625   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:10.743625   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:10.743625   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:10.743625   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:10.749213   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:11.245292   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:11.245292   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:11.245292   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:11.245292   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:11.249646   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:11.745246   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:11.745314   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:11.745314   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:11.745314   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:11.751051   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:11.752600   12940 node_ready.go:53] node "ha-766300-m02" has status "Ready":"False"
	I0807 18:39:12.231429   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:12.231486   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:12.231486   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:12.231486   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:12.238858   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:39:12.734384   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:12.734384   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:12.734576   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:12.734576   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:12.739469   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:13.235743   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:13.235743   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:13.235743   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:13.235876   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:13.244191   12940 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:39:13.735580   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:13.735644   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:13.735644   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:13.735644   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:13.741413   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:14.238579   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:14.238579   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.238579   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.238579   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.243252   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:14.245271   12940 node_ready.go:49] node "ha-766300-m02" has status "Ready":"True"
	I0807 18:39:14.245271   12940 node_ready.go:38] duration metric: took 22.5145588s for node "ha-766300-m02" to be "Ready" ...
	I0807 18:39:14.245271   12940 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 18:39:14.245566   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods
	I0807 18:39:14.245586   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.245586   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.245586   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.253871   12940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:39:14.263554   12940 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9tjv6" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:14.263554   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9tjv6
	I0807 18:39:14.263554   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.263554   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.263554   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.267712   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:14.268836   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:39:14.269430   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.269700   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.269881   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.282860   12940 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0807 18:39:14.283691   12940 pod_ready.go:92] pod "coredns-7db6d8ff4d-9tjv6" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:14.283747   12940 pod_ready.go:81] duration metric: took 20.1928ms for pod "coredns-7db6d8ff4d-9tjv6" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:14.283747   12940 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fqjwg" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:14.283892   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqjwg
	I0807 18:39:14.283923   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.283923   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.283923   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.288664   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:14.290246   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:39:14.290300   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.290300   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.290352   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.296251   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:14.296978   12940 pod_ready.go:92] pod "coredns-7db6d8ff4d-fqjwg" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:14.296978   12940 pod_ready.go:81] duration metric: took 13.2299ms for pod "coredns-7db6d8ff4d-fqjwg" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:14.296978   12940 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:14.296978   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/etcd-ha-766300
	I0807 18:39:14.296978   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.296978   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.296978   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.301437   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:39:14.302377   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:39:14.302477   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.302477   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.302477   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.305735   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:39:14.307023   12940 pod_ready.go:92] pod "etcd-ha-766300" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:14.307088   12940 pod_ready.go:81] duration metric: took 10.1102ms for pod "etcd-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:14.307088   12940 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:14.307207   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/etcd-ha-766300-m02
	I0807 18:39:14.307207   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.307267   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.307267   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.310704   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:39:14.311747   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:14.311747   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.311834   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.311834   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.315364   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:39:14.316926   12940 pod_ready.go:92] pod "etcd-ha-766300-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:14.316926   12940 pod_ready.go:81] duration metric: took 9.8379ms for pod "etcd-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:14.316926   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:14.440471   12940 request.go:629] Waited for 123.2095ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-766300
	I0807 18:39:14.440547   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-766300
	I0807 18:39:14.440547   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.440580   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.440580   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.446805   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:39:14.645687   12940 request.go:629] Waited for 196.1117ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:39:14.646102   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:39:14.646102   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.646102   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.646102   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.650903   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:14.651827   12940 pod_ready.go:92] pod "kube-apiserver-ha-766300" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:14.651929   12940 pod_ready.go:81] duration metric: took 334.999ms for pod "kube-apiserver-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:14.651929   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:14.850199   12940 request.go:629] Waited for 197.6715ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-766300-m02
	I0807 18:39:14.850285   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-766300-m02
	I0807 18:39:14.850365   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:14.850365   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:14.850540   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:14.856076   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:15.052744   12940 request.go:629] Waited for 195.6101ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:15.052831   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:15.052831   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:15.052901   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:15.052901   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:15.063562   12940 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0807 18:39:15.064674   12940 pod_ready.go:92] pod "kube-apiserver-ha-766300-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:15.064734   12940 pod_ready.go:81] duration metric: took 412.7994ms for pod "kube-apiserver-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:15.064787   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:15.239176   12940 request.go:629] Waited for 174.167ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-766300
	I0807 18:39:15.239176   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-766300
	I0807 18:39:15.239176   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:15.239459   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:15.239459   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:15.244373   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:15.443606   12940 request.go:629] Waited for 197.7794ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:39:15.443765   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:39:15.443765   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:15.443765   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:15.443765   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:15.448658   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:15.450479   12940 pod_ready.go:92] pod "kube-controller-manager-ha-766300" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:15.450555   12940 pod_ready.go:81] duration metric: took 385.7627ms for pod "kube-controller-manager-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:15.450555   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:15.647228   12940 request.go:629] Waited for 196.4263ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-766300-m02
	I0807 18:39:15.647392   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-766300-m02
	I0807 18:39:15.647392   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:15.647392   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:15.647392   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:15.652244   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:39:15.851763   12940 request.go:629] Waited for 199.2773ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:15.851911   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:15.851911   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:15.851911   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:15.851967   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:15.856337   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:15.857900   12940 pod_ready.go:92] pod "kube-controller-manager-ha-766300-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:15.857968   12940 pod_ready.go:81] duration metric: took 407.3359ms for pod "kube-controller-manager-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:15.857968   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8v6vm" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:16.040099   12940 request.go:629] Waited for 181.8172ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8v6vm
	I0807 18:39:16.040204   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8v6vm
	I0807 18:39:16.040204   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:16.040289   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:16.040289   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:16.046199   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:16.242532   12940 request.go:629] Waited for 194.9787ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:16.242873   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:16.242873   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:16.242908   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:16.242908   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:16.247518   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:16.249056   12940 pod_ready.go:92] pod "kube-proxy-8v6vm" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:16.249056   12940 pod_ready.go:81] duration metric: took 391.083ms for pod "kube-proxy-8v6vm" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:16.249165   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d6ckx" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:16.444988   12940 request.go:629] Waited for 195.4584ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d6ckx
	I0807 18:39:16.445208   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d6ckx
	I0807 18:39:16.445208   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:16.445285   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:16.445285   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:16.453483   12940 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:39:16.649838   12940 request.go:629] Waited for 195.4687ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:39:16.650232   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:39:16.650232   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:16.650232   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:16.650232   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:16.655334   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:16.656316   12940 pod_ready.go:92] pod "kube-proxy-d6ckx" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:16.656316   12940 pod_ready.go:81] duration metric: took 407.1453ms for pod "kube-proxy-d6ckx" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:16.656316   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:16.854043   12940 request.go:629] Waited for 197.5552ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-766300
	I0807 18:39:16.854210   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-766300
	I0807 18:39:16.854210   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:16.854210   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:16.854351   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:16.858752   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:17.042139   12940 request.go:629] Waited for 182.0486ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:39:17.042502   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:39:17.042665   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:17.042665   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:17.042665   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:17.048101   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:17.049567   12940 pod_ready.go:92] pod "kube-scheduler-ha-766300" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:17.049567   12940 pod_ready.go:81] duration metric: took 393.246ms for pod "kube-scheduler-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:17.049636   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:17.246897   12940 request.go:629] Waited for 196.9126ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-766300-m02
	I0807 18:39:17.247148   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-766300-m02
	I0807 18:39:17.247185   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:17.247208   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:17.247230   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:17.251983   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:17.450137   12940 request.go:629] Waited for 196.2691ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:17.450254   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:39:17.450254   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:17.450254   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:17.450254   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:17.454709   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:39:17.456365   12940 pod_ready.go:92] pod "kube-scheduler-ha-766300-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:39:17.456365   12940 pod_ready.go:81] duration metric: took 406.7239ms for pod "kube-scheduler-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:39:17.456365   12940 pod_ready.go:38] duration metric: took 3.2110527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 18:39:17.456569   12940 api_server.go:52] waiting for apiserver process to appear ...
	I0807 18:39:17.469128   12940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:39:17.498364   12940 api_server.go:72] duration metric: took 26.2350393s to wait for apiserver process to appear ...
	I0807 18:39:17.498499   12940 api_server.go:88] waiting for apiserver healthz status ...
	I0807 18:39:17.498568   12940 api_server.go:253] Checking apiserver healthz at https://172.28.224.88:8443/healthz ...
	I0807 18:39:17.508002   12940 api_server.go:279] https://172.28.224.88:8443/healthz returned 200:
	ok
	I0807 18:39:17.508997   12940 round_trippers.go:463] GET https://172.28.224.88:8443/version
	I0807 18:39:17.508997   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:17.508997   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:17.509092   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:17.510242   12940 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0807 18:39:17.511037   12940 api_server.go:141] control plane version: v1.30.3
	I0807 18:39:17.511119   12940 api_server.go:131] duration metric: took 12.5964ms to wait for apiserver health ...
	I0807 18:39:17.511119   12940 system_pods.go:43] waiting for kube-system pods to appear ...
	I0807 18:39:17.640308   12940 request.go:629] Waited for 128.8747ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods
	I0807 18:39:17.640308   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods
	I0807 18:39:17.640543   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:17.640543   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:17.640543   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:17.648305   12940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:39:17.656376   12940 system_pods.go:59] 17 kube-system pods found
	I0807 18:39:17.656376   12940 system_pods.go:61] "coredns-7db6d8ff4d-9tjv6" [54967df0-ac2c-4024-8947-b4e972a4b59a] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "coredns-7db6d8ff4d-fqjwg" [cc54cc3e-f40c-43c2-ac25-25bd315c3dd9] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "etcd-ha-766300" [5c619c4a-4fd5-494f-bb7b-80754258d40a] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "etcd-ha-766300-m02" [97b2b2f2-ea73-4de0-86aa-4854386b8f71] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "kindnet-gh6wt" [35666307-476d-460d-af1d-23d3bae8aec2] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "kindnet-scfzz" [ad036ebf-9679-47a6-b8e0-f433a34f55cb] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "kube-apiserver-ha-766300" [d1f122ef-d89f-4a4f-8194-86e5e84faea4] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "kube-apiserver-ha-766300-m02" [249c438f-592d-47ba-bf0b-252bde32a27d] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "kube-controller-manager-ha-766300" [648bbb2b-06b4-487b-a9fa-c530a7ed5d11] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "kube-controller-manager-ha-766300-m02" [c8ab36c4-89ca-4519-8eaa-c27c00b78095] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "kube-proxy-8v6vm" [c6fa744a-fc9b-4da6-933a-866565e8318c] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "kube-proxy-d6ckx" [257858b0-6bb6-4bfb-9b5c-591fdb24929e] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "kube-scheduler-ha-766300" [1d44914f-67d1-4b8f-934c-273d21dc7d60] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "kube-scheduler-ha-766300-m02" [22b9a1c1-e369-4270-90f6-f3caa10e0705] Running
	I0807 18:39:17.656376   12940 system_pods.go:61] "kube-vip-ha-766300" [e2b31b5c-6e03-4e58-8cb4-10fc6869812b] Running
	I0807 18:39:17.656953   12940 system_pods.go:61] "kube-vip-ha-766300-m02" [0034d823-e21f-4be0-bbdb-09db13937fb7] Running
	I0807 18:39:17.657063   12940 system_pods.go:61] "storage-provisioner" [9a8a8ca1-bdd6-4ca8-a2d4-de3839223c9c] Running
	I0807 18:39:17.657082   12940 system_pods.go:74] duration metric: took 145.9424ms to wait for pod list to return data ...
	I0807 18:39:17.657116   12940 default_sa.go:34] waiting for default service account to be created ...
	I0807 18:39:17.844035   12940 request.go:629] Waited for 186.9162ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/default/serviceaccounts
	I0807 18:39:17.844035   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/default/serviceaccounts
	I0807 18:39:17.844035   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:17.844035   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:17.844035   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:17.850013   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:39:17.850701   12940 default_sa.go:45] found service account: "default"
	I0807 18:39:17.850760   12940 default_sa.go:55] duration metric: took 193.6416ms for default service account to be created ...
	I0807 18:39:17.850760   12940 system_pods.go:116] waiting for k8s-apps to be running ...
	I0807 18:39:18.046880   12940 request.go:629] Waited for 195.5777ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods
	I0807 18:39:18.046880   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods
	I0807 18:39:18.047035   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:18.047035   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:18.047066   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:18.058532   12940 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0807 18:39:18.065181   12940 system_pods.go:86] 17 kube-system pods found
	I0807 18:39:18.065181   12940 system_pods.go:89] "coredns-7db6d8ff4d-9tjv6" [54967df0-ac2c-4024-8947-b4e972a4b59a] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "coredns-7db6d8ff4d-fqjwg" [cc54cc3e-f40c-43c2-ac25-25bd315c3dd9] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "etcd-ha-766300" [5c619c4a-4fd5-494f-bb7b-80754258d40a] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "etcd-ha-766300-m02" [97b2b2f2-ea73-4de0-86aa-4854386b8f71] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kindnet-gh6wt" [35666307-476d-460d-af1d-23d3bae8aec2] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kindnet-scfzz" [ad036ebf-9679-47a6-b8e0-f433a34f55cb] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kube-apiserver-ha-766300" [d1f122ef-d89f-4a4f-8194-86e5e84faea4] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kube-apiserver-ha-766300-m02" [249c438f-592d-47ba-bf0b-252bde32a27d] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kube-controller-manager-ha-766300" [648bbb2b-06b4-487b-a9fa-c530a7ed5d11] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kube-controller-manager-ha-766300-m02" [c8ab36c4-89ca-4519-8eaa-c27c00b78095] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kube-proxy-8v6vm" [c6fa744a-fc9b-4da6-933a-866565e8318c] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kube-proxy-d6ckx" [257858b0-6bb6-4bfb-9b5c-591fdb24929e] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kube-scheduler-ha-766300" [1d44914f-67d1-4b8f-934c-273d21dc7d60] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kube-scheduler-ha-766300-m02" [22b9a1c1-e369-4270-90f6-f3caa10e0705] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kube-vip-ha-766300" [e2b31b5c-6e03-4e58-8cb4-10fc6869812b] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "kube-vip-ha-766300-m02" [0034d823-e21f-4be0-bbdb-09db13937fb7] Running
	I0807 18:39:18.065181   12940 system_pods.go:89] "storage-provisioner" [9a8a8ca1-bdd6-4ca8-a2d4-de3839223c9c] Running
	I0807 18:39:18.065181   12940 system_pods.go:126] duration metric: took 214.4184ms to wait for k8s-apps to be running ...
	I0807 18:39:18.065181   12940 system_svc.go:44] waiting for kubelet service to be running ....
	I0807 18:39:18.079966   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:39:18.105345   12940 system_svc.go:56] duration metric: took 40.1628ms WaitForService to wait for kubelet
	I0807 18:39:18.106440   12940 kubeadm.go:582] duration metric: took 26.8430568s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 18:39:18.106440   12940 node_conditions.go:102] verifying NodePressure condition ...
	I0807 18:39:18.248371   12940 request.go:629] Waited for 141.637ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes
	I0807 18:39:18.248371   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes
	I0807 18:39:18.248371   12940 round_trippers.go:469] Request Headers:
	I0807 18:39:18.248596   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:39:18.248596   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:39:18.257609   12940 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0807 18:39:18.258646   12940 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 18:39:18.258646   12940 node_conditions.go:123] node cpu capacity is 2
	I0807 18:39:18.258646   12940 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 18:39:18.258646   12940 node_conditions.go:123] node cpu capacity is 2
	I0807 18:39:18.258646   12940 node_conditions.go:105] duration metric: took 152.1572ms to run NodePressure ...
	I0807 18:39:18.258646   12940 start.go:241] waiting for startup goroutines ...
	I0807 18:39:18.258646   12940 start.go:255] writing updated cluster config ...
	I0807 18:39:18.263168   12940 out.go:177] 
	I0807 18:39:18.277612   12940 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:39:18.277612   12940 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\config.json ...
	I0807 18:39:18.284480   12940 out.go:177] * Starting "ha-766300-m03" control-plane node in "ha-766300" cluster
	I0807 18:39:18.287016   12940 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 18:39:18.287080   12940 cache.go:56] Caching tarball of preloaded images
	I0807 18:39:18.287327   12940 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0807 18:39:18.287327   12940 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 18:39:18.287933   12940 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\config.json ...
	I0807 18:39:18.293848   12940 start.go:360] acquireMachinesLock for ha-766300-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 18:39:18.294016   12940 start.go:364] duration metric: took 135.4µs to acquireMachinesLock for "ha-766300-m03"
	I0807 18:39:18.294073   12940 start.go:93] Provisioning new machine with config: &{Name:ha-766300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.3 ClusterName:ha-766300 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.224.88 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.238.183 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 18:39:18.294073   12940 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0807 18:39:18.297706   12940 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 18:39:18.297890   12940 start.go:159] libmachine.API.Create for "ha-766300" (driver="hyperv")
	I0807 18:39:18.297890   12940 client.go:168] LocalClient.Create starting
	I0807 18:39:18.297890   12940 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0807 18:39:18.298620   12940 main.go:141] libmachine: Decoding PEM data...
	I0807 18:39:18.298620   12940 main.go:141] libmachine: Parsing certificate...
	I0807 18:39:18.298620   12940 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0807 18:39:18.298620   12940 main.go:141] libmachine: Decoding PEM data...
	I0807 18:39:18.298620   12940 main.go:141] libmachine: Parsing certificate...
	I0807 18:39:18.299265   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0807 18:39:20.242074   12940 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0807 18:39:20.242074   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:20.243126   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0807 18:39:22.026163   12940 main.go:141] libmachine: [stdout =====>] : False
	
	I0807 18:39:22.026163   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:22.026163   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0807 18:39:23.553099   12940 main.go:141] libmachine: [stdout =====>] : True
	
	I0807 18:39:23.553612   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:23.553612   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0807 18:39:27.365238   12940 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0807 18:39:27.365628   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:27.368044   12940 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0807 18:39:27.812541   12940 main.go:141] libmachine: Creating SSH key...
	I0807 18:39:27.960062   12940 main.go:141] libmachine: Creating VM...
	I0807 18:39:27.960062   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0807 18:39:31.015626   12940 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0807 18:39:31.015626   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:31.015626   12940 main.go:141] libmachine: Using switch "Default Switch"
	I0807 18:39:31.015778   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0807 18:39:32.860744   12940 main.go:141] libmachine: [stdout =====>] : True
	
	I0807 18:39:32.860744   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:32.860877   12940 main.go:141] libmachine: Creating VHD
	I0807 18:39:32.860877   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0807 18:39:36.732406   12940 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 758DB308-813F-4953-BDDD-8289B54F244C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0807 18:39:36.732512   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:36.732512   12940 main.go:141] libmachine: Writing magic tar header
	I0807 18:39:36.732612   12940 main.go:141] libmachine: Writing SSH key tar header
	I0807 18:39:36.743491   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0807 18:39:40.027193   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:39:40.028165   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:40.028165   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03\disk.vhd' -SizeBytes 20000MB
	I0807 18:39:42.633400   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:39:42.634114   12940 main.go:141] libmachine: [stderr =====>] : 
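The disk setup above is a three-step dance: `New-VHD` creates a small fixed 10MB VHD (which receives the "magic tar header" and SSH key), `Convert-VHD` turns it into a dynamic disk and deletes the source, and `Resize-VHD` grows it to the requested 20000MB. A sketch that assembles those three cmdlet invocations as strings (the `vhdCommands` helper and the example directory are hypothetical; the cmdlet arguments mirror the log lines above):

```go
package main

import "fmt"

// vhdCommands reproduces the three-step VHD sequence from the log:
// create a small fixed VHD, convert it to dynamic, then resize it.
func vhdCommands(dir string, sizeMB int) []string {
	fixed := dir + `\fixed.vhd`
	disk := dir + `\disk.vhd`
	return []string{
		fmt.Sprintf("Hyper-V\\New-VHD -Path '%s' -SizeBytes 10MB -Fixed", fixed),
		fmt.Sprintf("Hyper-V\\Convert-VHD -Path '%s' -DestinationPath '%s' -VHDType Dynamic -DeleteSource", fixed, disk),
		fmt.Sprintf("Hyper-V\\Resize-VHD -Path '%s' -SizeBytes %dMB", disk, sizeMB),
	}
}

func main() {
	// Hypothetical machine directory, standing in for the profile path in the log.
	for _, c := range vhdCommands(`C:\machines\ha-766300-m03`, 20000) {
		fmt.Println(c)
	}
}
```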
	I0807 18:39:42.634218   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-766300-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0807 18:39:46.400711   12940 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-766300-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0807 18:39:46.400711   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:46.401465   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-766300-m03 -DynamicMemoryEnabled $false
	I0807 18:39:48.729336   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:39:48.729336   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:48.730024   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-766300-m03 -Count 2
	I0807 18:39:50.976387   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:39:50.976479   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:50.976479   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-766300-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03\boot2docker.iso'
	I0807 18:39:53.686717   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:39:53.687223   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:53.687223   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-766300-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03\disk.vhd'
	I0807 18:39:56.427762   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:39:56.427762   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:56.427762   12940 main.go:141] libmachine: Starting VM...
	I0807 18:39:56.427762   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-766300-m03
	I0807 18:39:59.677372   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:39:59.677372   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:39:59.677372   12940 main.go:141] libmachine: Waiting for host to start...
	I0807 18:39:59.677372   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:02.113855   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:02.114549   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:02.114608   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:40:04.729194   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:40:04.729194   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:05.740197   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:08.098936   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:08.099334   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:08.099334   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:40:10.745011   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:40:10.745011   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:11.755073   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:14.044294   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:14.044492   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:14.044492   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:40:16.645767   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:40:16.645767   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:17.654128   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:19.981889   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:19.981889   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:19.981889   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:40:22.687744   12940 main.go:141] libmachine: [stdout =====>] : 
	I0807 18:40:22.688318   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:23.689224   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:26.061254   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:26.061459   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:26.061632   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:40:28.734016   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:40:28.734016   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:28.735101   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:30.981416   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:30.981709   12940 main.go:141] libmachine: [stderr =====>] : 
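The "Waiting for host to start..." section above alternates two queries — VM state and adapter IP — and retries (with a ~1s pause) until the adapter finally reports 172.28.233.130. A minimal sketch of that retry-until-nonempty loop, with a canned query standing in for the PowerShell call (the `waitForIP` helper is illustrative):

```go
package main

import (
	"errors"
	"fmt"
)

// waitForIP retries the address query until the Hyper-V adapter reports
// a non-empty IP, as in the "Waiting for host to start..." loop above.
func waitForIP(query func() string, attempts int) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip := query(); ip != "" {
			return ip, nil
		}
	}
	return "", errors.New("no IP assigned")
}

func main() {
	// Simulated query results: empty until the guest obtains a lease.
	outputs := []string{"", "", "", "172.28.233.130"}
	i := 0
	ip, err := waitForIP(func() string { o := outputs[i]; i++; return o }, len(outputs))
	fmt.Println(ip, err)
}
```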
	I0807 18:40:30.981709   12940 machine.go:94] provisionDockerMachine start ...
	I0807 18:40:30.981709   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:33.265775   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:33.265775   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:33.266390   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:40:35.917231   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:40:35.917231   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:35.923650   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:40:35.924199   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.233.130 22 <nil> <nil>}
	I0807 18:40:35.924199   12940 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 18:40:36.047542   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0807 18:40:36.047542   12940 buildroot.go:166] provisioning hostname "ha-766300-m03"
	I0807 18:40:36.047542   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:38.283780   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:38.284504   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:38.284504   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:40:40.979089   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:40:40.979339   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:40.985107   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:40:40.985649   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.233.130 22 <nil> <nil>}
	I0807 18:40:40.985649   12940 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-766300-m03 && echo "ha-766300-m03" | sudo tee /etc/hostname
	I0807 18:40:41.146157   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-766300-m03
	
	I0807 18:40:41.146264   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:43.404385   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:43.404764   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:43.404837   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:40:46.071087   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:40:46.071087   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:46.076990   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:40:46.077372   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.233.130 22 <nil> <nil>}
	I0807 18:40:46.077912   12940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-766300-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-766300-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-766300-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 18:40:46.221352   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 18:40:46.221352   12940 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0807 18:40:46.221426   12940 buildroot.go:174] setting up certificates
	I0807 18:40:46.221426   12940 provision.go:84] configureAuth start
	I0807 18:40:46.221552   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:48.450906   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:48.450906   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:48.451449   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:40:51.126252   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:40:51.126252   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:51.127193   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:53.346659   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:53.346659   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:53.346659   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:40:56.030723   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:40:56.030723   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:56.030723   12940 provision.go:143] copyHostCerts
	I0807 18:40:56.031878   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0807 18:40:56.032236   12940 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0807 18:40:56.032336   12940 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0807 18:40:56.032697   12940 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0807 18:40:56.033587   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0807 18:40:56.033587   12940 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0807 18:40:56.033587   12940 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0807 18:40:56.034462   12940 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0807 18:40:56.035619   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0807 18:40:56.035807   12940 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0807 18:40:56.035807   12940 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0807 18:40:56.035807   12940 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0807 18:40:56.037361   12940 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-766300-m03 san=[127.0.0.1 172.28.233.130 ha-766300-m03 localhost minikube]
	I0807 18:40:56.304335   12940 provision.go:177] copyRemoteCerts
	I0807 18:40:56.317330   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 18:40:56.317330   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:40:58.590497   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:40:58.590497   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:40:58.590966   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:41:01.258076   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:41:01.258076   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:01.258994   12940 sshutil.go:53] new ssh client: &{IP:172.28.233.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03\id_rsa Username:docker}
	I0807 18:41:01.368132   12940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0505801s)
	I0807 18:41:01.368132   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0807 18:41:01.368751   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 18:41:01.416213   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0807 18:41:01.416514   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0807 18:41:01.464664   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0807 18:41:01.465643   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0807 18:41:01.514396   12940 provision.go:87] duration metric: took 15.2927756s to configureAuth
	I0807 18:41:01.514396   12940 buildroot.go:189] setting minikube options for container-runtime
	I0807 18:41:01.515102   12940 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:41:01.515238   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:41:03.726417   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:03.727058   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:03.727411   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:41:06.384761   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:41:06.384761   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:06.391660   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:41:06.392205   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.233.130 22 <nil> <nil>}
	I0807 18:41:06.392205   12940 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0807 18:41:06.511802   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0807 18:41:06.511878   12940 buildroot.go:70] root file system type: tmpfs
	I0807 18:41:06.512223   12940 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0807 18:41:06.512282   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:41:08.750415   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:08.750415   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:08.751096   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:41:11.408510   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:41:11.408510   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:11.414515   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:41:11.415201   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.233.130 22 <nil> <nil>}
	I0807 18:41:11.415201   12940 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.224.88"
	Environment="NO_PROXY=172.28.224.88,172.28.238.183"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0807 18:41:11.560232   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.224.88
	Environment=NO_PROXY=172.28.224.88,172.28.238.183
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0807 18:41:11.560386   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:41:13.816422   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:13.816990   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:13.817061   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:41:16.514507   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:41:16.514507   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:16.521263   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:41:16.521844   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.233.130 22 <nil> <nil>}
	I0807 18:41:16.521883   12940 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0807 18:41:18.837736   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0807 18:41:18.837736   12940 machine.go:97] duration metric: took 47.8554186s to provisionDockerMachine
	I0807 18:41:18.837736   12940 client.go:171] duration metric: took 2m0.5383074s to LocalClient.Create
	I0807 18:41:18.837736   12940 start.go:167] duration metric: took 2m0.5383074s to libmachine.API.Create "ha-766300"
	I0807 18:41:18.837736   12940 start.go:293] postStartSetup for "ha-766300-m03" (driver="hyperv")
	I0807 18:41:18.837736   12940 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 18:41:18.851705   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 18:41:18.851705   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:41:21.070549   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:21.070593   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:21.070681   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:41:23.712527   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:41:23.712527   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:23.712527   12940 sshutil.go:53] new ssh client: &{IP:172.28.233.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03\id_rsa Username:docker}
	I0807 18:41:23.812678   12940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9606641s)
	I0807 18:41:23.824635   12940 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 18:41:23.831791   12940 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 18:41:23.831866   12940 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0807 18:41:23.832339   12940 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0807 18:41:23.833499   12940 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> 96602.pem in /etc/ssl/certs
	I0807 18:41:23.833667   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> /etc/ssl/certs/96602.pem
	I0807 18:41:23.846071   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 18:41:23.863341   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /etc/ssl/certs/96602.pem (1708 bytes)
	I0807 18:41:23.910437   12940 start.go:296] duration metric: took 5.0726367s for postStartSetup
	I0807 18:41:23.913180   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:41:26.140068   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:26.140275   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:26.140275   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:41:28.779229   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:41:28.779229   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:28.779507   12940 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\config.json ...
	I0807 18:41:28.782142   12940 start.go:128] duration metric: took 2m10.486404s to createHost
	I0807 18:41:28.782142   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:41:30.990595   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:30.990595   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:30.991298   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:41:33.628034   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:41:33.628165   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:33.636700   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:41:33.637633   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.233.130 22 <nil> <nil>}
	I0807 18:41:33.637633   12940 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 18:41:33.758348   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723056093.771913301
	
	I0807 18:41:33.758348   12940 fix.go:216] guest clock: 1723056093.771913301
	I0807 18:41:33.758348   12940 fix.go:229] Guest: 2024-08-07 18:41:33.771913301 +0000 UTC Remote: 2024-08-07 18:41:28.7821423 +0000 UTC m=+597.777960501 (delta=4.989771001s)
	I0807 18:41:33.758348   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:41:35.964326   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:35.964326   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:35.964855   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:41:38.598224   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:41:38.598815   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:38.604663   12940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:41:38.604824   12940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.233.130 22 <nil> <nil>}
	I0807 18:41:38.604824   12940 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1723056093
	I0807 18:41:38.738848   12940 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Aug  7 18:41:33 UTC 2024
	
	I0807 18:41:38.738888   12940 fix.go:236] clock set: Wed Aug  7 18:41:33 UTC 2024
	 (err=<nil>)
	I0807 18:41:38.738888   12940 start.go:83] releasing machines lock for "ha-766300-m03", held for 2m20.4430236s
	I0807 18:41:38.738960   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:41:40.962560   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:40.963256   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:40.963256   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:41:43.565131   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:41:43.565364   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:43.569547   12940 out.go:177] * Found network options:
	I0807 18:41:43.572411   12940 out.go:177]   - NO_PROXY=172.28.224.88,172.28.238.183
	W0807 18:41:43.574907   12940 proxy.go:119] fail to check proxy env: Error ip not in block
	W0807 18:41:43.574907   12940 proxy.go:119] fail to check proxy env: Error ip not in block
	I0807 18:41:43.577946   12940 out.go:177]   - NO_PROXY=172.28.224.88,172.28.238.183
	W0807 18:41:43.582494   12940 proxy.go:119] fail to check proxy env: Error ip not in block
	W0807 18:41:43.582494   12940 proxy.go:119] fail to check proxy env: Error ip not in block
	W0807 18:41:43.583517   12940 proxy.go:119] fail to check proxy env: Error ip not in block
	W0807 18:41:43.583517   12940 proxy.go:119] fail to check proxy env: Error ip not in block
	I0807 18:41:43.586175   12940 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0807 18:41:43.586175   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:41:43.596180   12940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0807 18:41:43.596180   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300-m03 ).state
	I0807 18:41:45.909854   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:45.909854   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:45.910452   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:41:45.931221   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:45.931221   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:45.932096   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300-m03 ).networkadapters[0]).ipaddresses[0]
	I0807 18:41:48.749727   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:41:48.749727   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:48.750540   12940 sshutil.go:53] new ssh client: &{IP:172.28.233.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03\id_rsa Username:docker}
	I0807 18:41:48.772477   12940 main.go:141] libmachine: [stdout =====>] : 172.28.233.130
	
	I0807 18:41:48.772477   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:48.772477   12940 sshutil.go:53] new ssh client: &{IP:172.28.233.130 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300-m03\id_rsa Username:docker}
	I0807 18:41:48.846070   12940 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.2598278s)
	W0807 18:41:48.846183   12940 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0807 18:41:48.866316   12940 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2700695s)
	W0807 18:41:48.866316   12940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 18:41:48.878215   12940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 18:41:48.908393   12940 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0807 18:41:48.908470   12940 start.go:495] detecting cgroup driver to use...
	I0807 18:41:48.908702   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 18:41:48.959132   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	W0807 18:41:48.960131   12940 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0807 18:41:48.960131   12940 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0807 18:41:48.991939   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0807 18:41:49.011917   12940 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0807 18:41:49.022903   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0807 18:41:49.056681   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 18:41:49.092850   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0807 18:41:49.126506   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 18:41:49.161485   12940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 18:41:49.198163   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0807 18:41:49.229702   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0807 18:41:49.260690   12940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0807 18:41:49.290775   12940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 18:41:49.320887   12940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 18:41:49.349897   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:41:49.553530   12940 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0807 18:41:49.588612   12940 start.go:495] detecting cgroup driver to use...
	I0807 18:41:49.601299   12940 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0807 18:41:49.637151   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 18:41:49.667148   12940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 18:41:49.714774   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 18:41:49.752437   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 18:41:49.788357   12940 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0807 18:41:49.851923   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 18:41:49.879736   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 18:41:49.927930   12940 ssh_runner.go:195] Run: which cri-dockerd
	I0807 18:41:49.946230   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0807 18:41:49.965049   12940 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0807 18:41:50.009091   12940 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0807 18:41:50.219111   12940 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0807 18:41:50.424239   12940 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0807 18:41:50.424320   12940 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0807 18:41:50.469832   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:41:50.667299   12940 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 18:41:53.271811   12940 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6044788s)
	I0807 18:41:53.284526   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0807 18:41:53.323554   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 18:41:53.357557   12940 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0807 18:41:53.569550   12940 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0807 18:41:53.780533   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:41:53.976111   12940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0807 18:41:54.020760   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 18:41:54.063117   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:41:54.279906   12940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0807 18:41:54.397317   12940 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0807 18:41:54.409022   12940 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0807 18:41:54.418510   12940 start.go:563] Will wait 60s for crictl version
	I0807 18:41:54.431124   12940 ssh_runner.go:195] Run: which crictl
	I0807 18:41:54.448098   12940 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 18:41:54.500125   12940 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0807 18:41:54.509857   12940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 18:41:54.552564   12940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 18:41:54.586599   12940 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0807 18:41:54.589564   12940 out.go:177]   - env NO_PROXY=172.28.224.88
	I0807 18:41:54.592573   12940 out.go:177]   - env NO_PROXY=172.28.224.88,172.28.238.183
	I0807 18:41:54.594565   12940 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0807 18:41:54.598563   12940 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0807 18:41:54.598563   12940 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0807 18:41:54.598563   12940 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0807 18:41:54.598563   12940 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:f6:3a:6a Flags:up|broadcast|multicast|running}
	I0807 18:41:54.601606   12940 ip.go:210] interface addr: fe80::e7eb:b592:d388:ff99/64
	I0807 18:41:54.601606   12940 ip.go:210] interface addr: 172.28.224.1/20
	I0807 18:41:54.613595   12940 ssh_runner.go:195] Run: grep 172.28.224.1	host.minikube.internal$ /etc/hosts
	I0807 18:41:54.619564   12940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 18:41:54.648751   12940 mustload.go:65] Loading cluster: ha-766300
	I0807 18:41:54.649779   12940 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:41:54.650760   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:41:56.877675   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:56.878425   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:56.878425   12940 host.go:66] Checking if "ha-766300" exists ...
	I0807 18:41:56.879052   12940 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300 for IP: 172.28.233.130
	I0807 18:41:56.879111   12940 certs.go:194] generating shared ca certs ...
	I0807 18:41:56.879111   12940 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:41:56.879643   12940 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0807 18:41:56.879839   12940 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0807 18:41:56.879839   12940 certs.go:256] generating profile certs ...
	I0807 18:41:56.881110   12940 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\client.key
	I0807 18:41:56.881352   12940 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.89c951d6
	I0807 18:41:56.881503   12940 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.89c951d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.224.88 172.28.238.183 172.28.233.130 172.28.239.254]
	I0807 18:41:57.100497   12940 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.89c951d6 ...
	I0807 18:41:57.100497   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.89c951d6: {Name:mk78c55a8688360f78348ea745a48b0e73bc659e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:41:57.102066   12940 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.89c951d6 ...
	I0807 18:41:57.102066   12940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.89c951d6: {Name:mk8999dda82f8a430006c9bcf70b2406d4ab194a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:41:57.102613   12940 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt.89c951d6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt
	I0807 18:41:57.117018   12940 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key.89c951d6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key
	I0807 18:41:57.119448   12940 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.key
	I0807 18:41:57.119568   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0807 18:41:57.119740   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0807 18:41:57.119926   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0807 18:41:57.120193   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0807 18:41:57.120193   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0807 18:41:57.120193   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0807 18:41:57.120193   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0807 18:41:57.120846   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0807 18:41:57.121424   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem (1338 bytes)
	W0807 18:41:57.121615   12940 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660_empty.pem, impossibly tiny 0 bytes
	I0807 18:41:57.121838   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0807 18:41:57.122147   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0807 18:41:57.122147   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0807 18:41:57.122899   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0807 18:41:57.123223   12940 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem (1708 bytes)
	I0807 18:41:57.123223   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> /usr/share/ca-certificates/96602.pem
	I0807 18:41:57.123757   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:41:57.123937   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem -> /usr/share/ca-certificates/9660.pem
	I0807 18:41:57.124106   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:41:59.379705   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:41:59.380670   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:41:59.380714   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:42:02.111754   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:42:02.111754   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:42:02.112818   12940 sshutil.go:53] new ssh client: &{IP:172.28.224.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\id_rsa Username:docker}
	I0807 18:42:02.215971   12940 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0807 18:42:02.223416   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0807 18:42:02.257708   12940 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0807 18:42:02.267969   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0807 18:42:02.303615   12940 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0807 18:42:02.311065   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0807 18:42:02.344282   12940 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0807 18:42:02.351203   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0807 18:42:02.385025   12940 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0807 18:42:02.392411   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0807 18:42:02.426401   12940 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0807 18:42:02.433489   12940 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0807 18:42:02.456579   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 18:42:02.507958   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 18:42:02.557479   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 18:42:02.607728   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0807 18:42:02.655739   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0807 18:42:02.703156   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0807 18:42:02.750995   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 18:42:02.799682   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-766300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0807 18:42:02.849651   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /usr/share/ca-certificates/96602.pem (1708 bytes)
	I0807 18:42:02.899640   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 18:42:02.952149   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem --> /usr/share/ca-certificates/9660.pem (1338 bytes)
	I0807 18:42:03.000639   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0807 18:42:03.034048   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0807 18:42:03.067576   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0807 18:42:03.101843   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0807 18:42:03.136591   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0807 18:42:03.169419   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0807 18:42:03.202201   12940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0807 18:42:03.255497   12940 ssh_runner.go:195] Run: openssl version
	I0807 18:42:03.276228   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9660.pem && ln -fs /usr/share/ca-certificates/9660.pem /etc/ssl/certs/9660.pem"
	I0807 18:42:03.309250   12940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9660.pem
	I0807 18:42:03.316562   12940 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 17:50 /usr/share/ca-certificates/9660.pem
	I0807 18:42:03.328422   12940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9660.pem
	I0807 18:42:03.350679   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9660.pem /etc/ssl/certs/51391683.0"
	I0807 18:42:03.383321   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96602.pem && ln -fs /usr/share/ca-certificates/96602.pem /etc/ssl/certs/96602.pem"
	I0807 18:42:03.415323   12940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96602.pem
	I0807 18:42:03.422342   12940 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 17:50 /usr/share/ca-certificates/96602.pem
	I0807 18:42:03.434519   12940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96602.pem
	I0807 18:42:03.457370   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96602.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 18:42:03.491621   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 18:42:03.537572   12940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:42:03.545217   12940 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:33 /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:42:03.558124   12940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:42:03.579280   12940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 18:42:03.609316   12940 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 18:42:03.617695   12940 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0807 18:42:03.617987   12940 kubeadm.go:934] updating node {m03 172.28.233.130 8443 v1.30.3 docker true true} ...
	I0807 18:42:03.618150   12940 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-766300-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.233.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-766300 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 18:42:03.618235   12940 kube-vip.go:115] generating kube-vip config ...
	I0807 18:42:03.631186   12940 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0807 18:42:03.659222   12940 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0807 18:42:03.659963   12940 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.239.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0807 18:42:03.672174   12940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 18:42:03.688196   12940 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0807 18:42:03.700204   12940 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0807 18:42:03.719724   12940 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0807 18:42:03.719724   12940 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0807 18:42:03.719724   12940 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0807 18:42:03.720486   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0807 18:42:03.720486   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0807 18:42:03.735682   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:42:03.736696   12940 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0807 18:42:03.737691   12940 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0807 18:42:03.759833   12940 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0807 18:42:03.759833   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0807 18:42:03.759833   12940 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0807 18:42:03.759833   12940 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0807 18:42:03.759833   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0807 18:42:03.776560   12940 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0807 18:42:03.837210   12940 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0807 18:42:03.837210   12940 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0807 18:42:05.155242   12940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0807 18:42:05.173243   12940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0807 18:42:05.210248   12940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 18:42:05.247886   12940 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0807 18:42:05.293274   12940 ssh_runner.go:195] Run: grep 172.28.239.254	control-plane.minikube.internal$ /etc/hosts
	I0807 18:42:05.304277   12940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.239.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 18:42:05.341471   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:42:05.548457   12940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 18:42:05.579435   12940 host.go:66] Checking if "ha-766300" exists ...
	I0807 18:42:05.580898   12940 start.go:317] joinCluster: &{Name:ha-766300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-766300 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.224.88 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.238.183 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.28.233.130 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:42:05.580898   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0807 18:42:05.581465   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-766300 ).state
	I0807 18:42:07.840234   12940 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 18:42:07.841260   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:42:07.841587   12940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-766300 ).networkadapters[0]).ipaddresses[0]
	I0807 18:42:10.567955   12940 main.go:141] libmachine: [stdout =====>] : 172.28.224.88
	
	I0807 18:42:10.567955   12940 main.go:141] libmachine: [stderr =====>] : 
	I0807 18:42:10.568301   12940 sshutil.go:53] new ssh client: &{IP:172.28.224.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-766300\id_rsa Username:docker}
	I0807 18:42:10.791697   12940 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.2106672s)
	I0807 18:42:10.791764   12940 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.28.233.130 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 18:42:10.791879   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zvef20.grw9eubfzckouhp2 --discovery-token-ca-cert-hash sha256:54111b4769238fc338d0511ba57c177f732a6d68c0b9cd0aa8cacbbf3c79643b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-766300-m03 --control-plane --apiserver-advertise-address=172.28.233.130 --apiserver-bind-port=8443"
	I0807 18:42:57.063467   12940 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zvef20.grw9eubfzckouhp2 --discovery-token-ca-cert-hash sha256:54111b4769238fc338d0511ba57c177f732a6d68c0b9cd0aa8cacbbf3c79643b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-766300-m03 --control-plane --apiserver-advertise-address=172.28.233.130 --apiserver-bind-port=8443": (46.2710007s)
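	For reference, the control-plane join that minikube drives over SSH in the two log lines above can be sketched as the following fragment. All values are taken verbatim from this log; the snippet only assembles the command line (it does not contact a cluster), so it is a reading aid rather than a runnable recipe.

	```shell
	# Sketch of the kubeadm join invocation seen above (values copied from this log).
	K8S_VERSION="v1.30.3"
	ENDPOINT="control-plane.minikube.internal:8443"
	TOKEN="zvef20.grw9eubfzckouhp2"   # minted just before with: kubeadm token create --print-join-command --ttl=0
	CA_HASH="sha256:54111b4769238fc338d0511ba57c177f732a6d68c0b9cd0aa8cacbbf3c79643b"
	NODE_IP="172.28.233.130"          # m03's Hyper-V adapter address

	JOIN_CMD="sudo env PATH=/var/lib/minikube/binaries/${K8S_VERSION}:\$PATH \
	kubeadm join ${ENDPOINT} --token ${TOKEN} --discovery-token-ca-cert-hash ${CA_HASH} \
	--ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock \
	--node-name=ha-766300-m03 --control-plane \
	--apiserver-advertise-address=${NODE_IP} --apiserver-bind-port=8443"

	echo "${JOIN_CMD}"
	```

	Note the join targets the shared endpoint `control-plane.minikube.internal:8443` (the HA VIP, 172.28.239.254 in the config dump above), while `--apiserver-advertise-address` pins the new member's own IP.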
	I0807 18:42:57.063467   12940 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0807 18:42:58.127621   12940 ssh_runner.go:235] Completed: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet": (1.0641404s)
	I0807 18:42:58.142729   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-766300-m03 minikube.k8s.io/updated_at=2024_08_07T18_42_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e minikube.k8s.io/name=ha-766300 minikube.k8s.io/primary=false
	I0807 18:42:58.349806   12940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-766300-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0807 18:42:58.520326   12940 start.go:319] duration metric: took 52.9387557s to joinCluster
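	The post-join setup in the preceding lines (label, then taint removal) can be sketched as below. Again the values come from this log and the snippet only prints the command lines; the trailing `-` on the taint argument is kubectl syntax for *removing* the taint, which is what lets regular workloads schedule onto this control-plane node.

	```shell
	# Post-join node setup as run above (sketch; no cluster access is exercised here).
	KUBECTL="sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
	NODE="ha-766300-m03"

	# Mark the node as a non-primary member of the ha-766300 profile.
	LABEL_ARGS="label --overwrite nodes ${NODE} minikube.k8s.io/name=ha-766300 minikube.k8s.io/primary=false"
	# Trailing '-' removes the NoSchedule taint that kubeadm places on control-plane nodes.
	TAINT_ARGS="taint nodes ${NODE} node-role.kubernetes.io/control-plane:NoSchedule-"

	echo "${KUBECTL} ${LABEL_ARGS}"
	echo "${KUBECTL} ${TAINT_ARGS}"
	```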
	I0807 18:42:58.520546   12940 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.28.233.130 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 18:42:58.521596   12940 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:42:58.523624   12940 out.go:177] * Verifying Kubernetes components...
	I0807 18:42:58.539832   12940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:42:58.930394   12940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 18:42:58.959404   12940 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 18:42:58.960394   12940 kapi.go:59] client config for ha-766300: &rest.Config{Host:"https://172.28.239.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-766300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-766300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1da64c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0807 18:42:58.960394   12940 kubeadm.go:483] Overriding stale ClientConfig host https://172.28.239.254:8443 with https://172.28.224.88:8443
	I0807 18:42:58.961398   12940 node_ready.go:35] waiting up to 6m0s for node "ha-766300-m03" to be "Ready" ...
	I0807 18:42:58.961398   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:42:58.961398   12940 round_trippers.go:469] Request Headers:
	I0807 18:42:58.961398   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:42:58.961398   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:42:58.975390   12940 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0807 18:42:59.471804   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:42:59.471804   12940 round_trippers.go:469] Request Headers:
	I0807 18:42:59.471804   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:42:59.471804   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:42:59.477960   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:42:59.976514   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:42:59.976514   12940 round_trippers.go:469] Request Headers:
	I0807 18:42:59.976514   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:42:59.976598   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:42:59.982033   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:00.469212   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:00.469212   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:00.469212   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:00.469212   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:00.486553   12940 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0807 18:43:00.974114   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:00.974114   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:00.974114   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:00.974114   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:00.979122   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:00.980473   12940 node_ready.go:53] node "ha-766300-m03" has status "Ready":"False"
	I0807 18:43:01.465168   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:01.465482   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:01.465482   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:01.465517   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:01.472239   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:43:01.970651   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:01.970781   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:01.970781   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:01.970781   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:01.975827   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:02.464253   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:02.464324   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:02.464324   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:02.464324   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:02.479941   12940 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0807 18:43:02.967010   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:02.967066   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:02.967066   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:02.967066   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:02.970871   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:43:03.474836   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:03.474836   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:03.474836   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:03.474836   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:03.485825   12940 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0807 18:43:03.488511   12940 node_ready.go:53] node "ha-766300-m03" has status "Ready":"False"
	I0807 18:43:03.964325   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:03.964409   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:03.964469   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:03.964469   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:03.969711   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:04.465208   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:04.465208   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:04.465208   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:04.465208   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:04.468817   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:43:04.968575   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:04.968741   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:04.968741   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:04.968741   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:04.973142   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:05.468065   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:05.468065   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:05.468065   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:05.468065   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:05.473014   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:05.972479   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:05.972705   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:05.972705   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:05.972705   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:05.977324   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:05.979245   12940 node_ready.go:53] node "ha-766300-m03" has status "Ready":"False"
	I0807 18:43:06.473589   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:06.474363   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:06.474972   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:06.474972   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:06.480343   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:06.963922   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:06.964017   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:06.964017   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:06.964017   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:06.969382   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:07.475433   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:07.475433   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:07.475500   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:07.475500   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:07.483947   12940 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:43:07.962585   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:07.962665   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:07.962665   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:07.962665   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:07.966581   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:43:08.464596   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:08.464596   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:08.464596   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:08.464596   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:08.472255   12940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:43:08.473958   12940 node_ready.go:53] node "ha-766300-m03" has status "Ready":"False"
	I0807 18:43:08.967167   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:08.967244   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:08.967244   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:08.967244   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:08.972982   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:09.467015   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:09.467015   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:09.467186   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:09.467186   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:09.474622   12940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:43:09.962030   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:09.962105   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:09.962105   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:09.962105   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:09.967555   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:10.474381   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:10.474458   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:10.474458   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:10.474458   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:10.486878   12940 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0807 18:43:10.487430   12940 node_ready.go:53] node "ha-766300-m03" has status "Ready":"False"
	I0807 18:43:10.971561   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:10.971561   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:10.971561   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:10.971561   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:10.977374   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:11.469440   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:11.469440   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:11.469440   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:11.469440   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:11.476071   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:43:11.970851   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:11.970926   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:11.970926   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:11.970926   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:11.976504   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:12.470890   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:12.470956   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:12.470956   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:12.470956   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:12.477755   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:43:12.973263   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:12.973495   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:12.973495   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:12.973495   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:12.979744   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:43:12.980367   12940 node_ready.go:53] node "ha-766300-m03" has status "Ready":"False"
	I0807 18:43:13.465744   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:13.465744   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:13.465744   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:13.465744   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:13.471908   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:43:13.966672   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:13.966741   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:13.966741   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:13.966741   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:13.971370   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:14.470881   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:14.470881   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:14.470881   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:14.470881   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:14.478623   12940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:43:14.972870   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:14.972870   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:14.972870   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:14.972870   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:14.977518   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:15.473915   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:15.474124   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:15.474124   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:15.474124   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:15.479622   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:15.480010   12940 node_ready.go:53] node "ha-766300-m03" has status "Ready":"False"
	I0807 18:43:15.977679   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:15.977790   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:15.977790   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:15.977790   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:15.984385   12940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:43:16.475709   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:16.475823   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:16.475823   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:16.475823   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:16.480231   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:16.976677   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:16.976677   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:16.976677   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:16.976677   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:16.981339   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:17.474910   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:17.475038   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:17.475038   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:17.475038   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:17.479865   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:17.480934   12940 node_ready.go:53] node "ha-766300-m03" has status "Ready":"False"
	I0807 18:43:17.977255   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:17.977255   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:17.977255   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:17.977255   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:17.982853   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:18.464009   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:18.464009   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:18.464009   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:18.464009   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:18.469360   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:18.962855   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:18.962855   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:18.962855   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:18.962855   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:18.967456   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:19.463909   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:19.464432   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.464432   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.464432   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.477247   12940 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0807 18:43:19.478250   12940 node_ready.go:49] node "ha-766300-m03" has status "Ready":"True"
	I0807 18:43:19.478250   12940 node_ready.go:38] duration metric: took 20.5165916s for node "ha-766300-m03" to be "Ready" ...
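	The block of repeated GETs above is node_ready.go polling `/api/v1/nodes/ha-766300-m03` roughly every 500 ms until the node's Ready condition flips to True (which here took 20.5s). The shape of that loop can be sketched as follows; `check_ready` is a stub standing in for the API call, wired to succeed on its third invocation.

	```shell
	# Sketch of the readiness poll above; check_ready stands in for
	# GET /api/v1/nodes/<name> + inspecting the Ready condition.
	n=0
	check_ready() { n=$((n+1)); [ "$n" -ge 3 ]; }

	polls=0
	until check_ready; do
	  polls=$((polls+1))
	  sleep 0.01   # the real loop waits ~500 ms between requests, up to the 6m0s budget
	done
	echo "ready after ${polls} retries"
	```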
	I0807 18:43:19.478250   12940 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 18:43:19.478250   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods
	I0807 18:43:19.478250   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.478250   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.478250   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.489269   12940 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0807 18:43:19.498259   12940 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9tjv6" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:19.499247   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9tjv6
	I0807 18:43:19.499247   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.499247   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.499247   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.503243   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:43:19.504793   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:43:19.504793   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.504793   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.504793   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.509781   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:19.510689   12940 pod_ready.go:92] pod "coredns-7db6d8ff4d-9tjv6" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:19.510689   12940 pod_ready.go:81] duration metric: took 11.4418ms for pod "coredns-7db6d8ff4d-9tjv6" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:19.510689   12940 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fqjwg" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:19.510689   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqjwg
	I0807 18:43:19.510689   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.510689   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.510689   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.515262   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:19.516718   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:43:19.516718   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.516718   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.516718   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.520310   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:43:19.521331   12940 pod_ready.go:92] pod "coredns-7db6d8ff4d-fqjwg" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:19.521331   12940 pod_ready.go:81] duration metric: took 10.6419ms for pod "coredns-7db6d8ff4d-fqjwg" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:19.521331   12940 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:19.521331   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/etcd-ha-766300
	I0807 18:43:19.521331   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.521331   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.521331   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.541221   12940 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0807 18:43:19.542172   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:43:19.542237   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.542237   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.542237   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.546428   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:19.547398   12940 pod_ready.go:92] pod "etcd-ha-766300" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:19.547456   12940 pod_ready.go:81] duration metric: took 26.1251ms for pod "etcd-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:19.547456   12940 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:19.547522   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/etcd-ha-766300-m02
	I0807 18:43:19.547522   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.547522   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.547522   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.551433   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:43:19.551433   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:43:19.551433   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.551433   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.551433   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.554395   12940 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:43:19.554395   12940 pod_ready.go:92] pod "etcd-ha-766300-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:19.554395   12940 pod_ready.go:81] duration metric: took 6.9386ms for pod "etcd-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:19.554395   12940 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-766300-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:19.670818   12940 request.go:629] Waited for 116.3357ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/etcd-ha-766300-m03
	I0807 18:43:19.670953   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/etcd-ha-766300-m03
	I0807 18:43:19.670953   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.670953   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.670953   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.675593   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:19.878264   12940 request.go:629] Waited for 201.643ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:19.878465   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:19.878465   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:19.878549   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:19.878549   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:19.881872   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:43:19.883521   12940 pod_ready.go:92] pod "etcd-ha-766300-m03" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:19.883521   12940 pod_ready.go:81] duration metric: took 329.1223ms for pod "etcd-ha-766300-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:19.883620   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:20.066588   12940 request.go:629] Waited for 182.5886ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-766300
	I0807 18:43:20.066667   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-766300
	I0807 18:43:20.066749   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:20.066749   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:20.066749   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:20.071053   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:20.269590   12940 request.go:629] Waited for 197.0445ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:43:20.269590   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:43:20.269590   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:20.269590   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:20.269590   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:20.274427   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:20.274984   12940 pod_ready.go:92] pod "kube-apiserver-ha-766300" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:20.275517   12940 pod_ready.go:81] duration metric: took 391.8924ms for pod "kube-apiserver-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:20.275599   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:20.471784   12940 request.go:629] Waited for 196.1829ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-766300-m02
	I0807 18:43:20.472010   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-766300-m02
	I0807 18:43:20.472010   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:20.472010   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:20.472010   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:20.477953   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:20.675312   12940 request.go:629] Waited for 196.3356ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:43:20.675455   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:43:20.675455   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:20.675455   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:20.675455   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:20.685109   12940 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0807 18:43:20.686302   12940 pod_ready.go:92] pod "kube-apiserver-ha-766300-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:20.686302   12940 pod_ready.go:81] duration metric: took 410.6979ms for pod "kube-apiserver-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:20.686302   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-766300-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:20.878746   12940 request.go:629] Waited for 192.3254ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-766300-m03
	I0807 18:43:20.878746   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-766300-m03
	I0807 18:43:20.878746   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:20.878746   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:20.878746   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:20.883612   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:21.069744   12940 request.go:629] Waited for 184.4783ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:21.069744   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:21.069744   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:21.069744   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:21.069744   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:21.076477   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:21.077009   12940 pod_ready.go:92] pod "kube-apiserver-ha-766300-m03" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:21.077009   12940 pod_ready.go:81] duration metric: took 390.7018ms for pod "kube-apiserver-ha-766300-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:21.077009   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:21.273746   12940 request.go:629] Waited for 196.7021ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-766300
	I0807 18:43:21.273804   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-766300
	I0807 18:43:21.273929   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:21.273929   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:21.273929   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:21.279497   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:21.477806   12940 request.go:629] Waited for 196.9576ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:43:21.477999   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:43:21.477999   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:21.478092   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:21.478154   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:21.483071   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:21.484133   12940 pod_ready.go:92] pod "kube-controller-manager-ha-766300" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:21.484191   12940 pod_ready.go:81] duration metric: took 407.1771ms for pod "kube-controller-manager-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:21.484191   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:21.665332   12940 request.go:629] Waited for 180.9277ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-766300-m02
	I0807 18:43:21.665450   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-766300-m02
	I0807 18:43:21.665450   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:21.665450   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:21.665450   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:21.676054   12940 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0807 18:43:21.870230   12940 request.go:629] Waited for 192.1494ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:43:21.870419   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:43:21.870529   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:21.870529   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:21.870529   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:21.876093   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:21.877076   12940 pod_ready.go:92] pod "kube-controller-manager-ha-766300-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:21.877076   12940 pod_ready.go:81] duration metric: took 392.8803ms for pod "kube-controller-manager-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:21.877076   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-766300-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:22.073598   12940 request.go:629] Waited for 196.1302ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-766300-m03
	I0807 18:43:22.073805   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-766300-m03
	I0807 18:43:22.073964   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:22.073964   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:22.073964   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:22.082776   12940 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:43:22.277313   12940 request.go:629] Waited for 193.2701ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:22.277493   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:22.277542   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:22.277542   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:22.277542   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:22.285315   12940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:43:22.286310   12940 pod_ready.go:92] pod "kube-controller-manager-ha-766300-m03" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:22.286344   12940 pod_ready.go:81] duration metric: took 409.2629ms for pod "kube-controller-manager-ha-766300-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:22.286344   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8v6vm" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:22.464754   12940 request.go:629] Waited for 178.3054ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8v6vm
	I0807 18:43:22.465005   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8v6vm
	I0807 18:43:22.465005   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:22.465096   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:22.465096   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:22.469502   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:22.668243   12940 request.go:629] Waited for 196.6128ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:43:22.668447   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:43:22.668447   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:22.668447   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:22.668447   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:22.682478   12940 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0807 18:43:22.683974   12940 pod_ready.go:92] pod "kube-proxy-8v6vm" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:22.683974   12940 pod_ready.go:81] duration metric: took 397.6242ms for pod "kube-proxy-8v6vm" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:22.683974   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d6ckx" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:22.870634   12940 request.go:629] Waited for 186.3767ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d6ckx
	I0807 18:43:22.870918   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d6ckx
	I0807 18:43:22.871009   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:22.871009   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:22.871009   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:22.876123   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:23.074586   12940 request.go:629] Waited for 196.3446ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:43:23.074811   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:43:23.074811   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:23.074894   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:23.074894   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:23.078401   12940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:43:23.079549   12940 pod_ready.go:92] pod "kube-proxy-d6ckx" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:23.080083   12940 pod_ready.go:81] duration metric: took 396.1045ms for pod "kube-proxy-d6ckx" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:23.080083   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mlf2g" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:23.278710   12940 request.go:629] Waited for 198.4724ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mlf2g
	I0807 18:43:23.278955   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mlf2g
	I0807 18:43:23.279050   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:23.279050   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:23.279050   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:23.286501   12940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:43:23.466811   12940 request.go:629] Waited for 178.9145ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:23.466946   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:23.466946   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:23.466946   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:23.466946   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:23.471530   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:23.472639   12940 pod_ready.go:92] pod "kube-proxy-mlf2g" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:23.472749   12940 pod_ready.go:81] duration metric: took 392.6612ms for pod "kube-proxy-mlf2g" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:23.472749   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:23.669830   12940 request.go:629] Waited for 196.8004ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-766300
	I0807 18:43:23.669830   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-766300
	I0807 18:43:23.669830   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:23.669830   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:23.669830   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:23.674477   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:23.871370   12940 request.go:629] Waited for 194.707ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:43:23.871370   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300
	I0807 18:43:23.871370   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:23.871706   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:23.871706   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:23.879775   12940 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:43:23.882573   12940 pod_ready.go:92] pod "kube-scheduler-ha-766300" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:23.882573   12940 pod_ready.go:81] duration metric: took 409.8183ms for pod "kube-scheduler-ha-766300" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:23.882573   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:24.074748   12940 request.go:629] Waited for 191.5798ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-766300-m02
	I0807 18:43:24.074748   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-766300-m02
	I0807 18:43:24.074748   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:24.074748   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:24.074748   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:24.079344   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:24.276529   12940 request.go:629] Waited for 195.2434ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:43:24.276863   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m02
	I0807 18:43:24.277008   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:24.277071   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:24.277071   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:24.282332   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:24.283671   12940 pod_ready.go:92] pod "kube-scheduler-ha-766300-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:24.283671   12940 pod_ready.go:81] duration metric: took 401.0932ms for pod "kube-scheduler-ha-766300-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:24.283671   12940 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-766300-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:24.466003   12940 request.go:629] Waited for 182.3298ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-766300-m03
	I0807 18:43:24.466003   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-766300-m03
	I0807 18:43:24.466003   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:24.466003   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:24.466003   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:24.470380   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:24.670602   12940 request.go:629] Waited for 198.2502ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:24.670857   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes/ha-766300-m03
	I0807 18:43:24.670857   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:24.670857   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:24.670857   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:24.675918   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:24.677418   12940 pod_ready.go:92] pod "kube-scheduler-ha-766300-m03" in "kube-system" namespace has status "Ready":"True"
	I0807 18:43:24.677527   12940 pod_ready.go:81] duration metric: took 393.8507ms for pod "kube-scheduler-ha-766300-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:43:24.677527   12940 pod_ready.go:38] duration metric: took 5.1992106s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 18:43:24.677637   12940 api_server.go:52] waiting for apiserver process to appear ...
	I0807 18:43:24.689204   12940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:43:24.720138   12940 api_server.go:72] duration metric: took 26.199119s to wait for apiserver process to appear ...
	I0807 18:43:24.720187   12940 api_server.go:88] waiting for apiserver healthz status ...
	I0807 18:43:24.720187   12940 api_server.go:253] Checking apiserver healthz at https://172.28.224.88:8443/healthz ...
	I0807 18:43:24.729167   12940 api_server.go:279] https://172.28.224.88:8443/healthz returned 200:
	ok
	I0807 18:43:24.729167   12940 round_trippers.go:463] GET https://172.28.224.88:8443/version
	I0807 18:43:24.729167   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:24.729167   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:24.729167   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:24.731170   12940 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:43:24.731234   12940 api_server.go:141] control plane version: v1.30.3
	I0807 18:43:24.731234   12940 api_server.go:131] duration metric: took 11.0465ms to wait for apiserver health ...
	I0807 18:43:24.731234   12940 system_pods.go:43] waiting for kube-system pods to appear ...
	I0807 18:43:24.871985   12940 request.go:629] Waited for 140.4152ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods
	I0807 18:43:24.871985   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods
	I0807 18:43:24.871985   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:24.871985   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:24.871985   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:24.881943   12940 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0807 18:43:24.892392   12940 system_pods.go:59] 24 kube-system pods found
	I0807 18:43:24.892392   12940 system_pods.go:61] "coredns-7db6d8ff4d-9tjv6" [54967df0-ac2c-4024-8947-b4e972a4b59a] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "coredns-7db6d8ff4d-fqjwg" [cc54cc3e-f40c-43c2-ac25-25bd315c3dd9] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "etcd-ha-766300" [5c619c4a-4fd5-494f-bb7b-80754258d40a] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "etcd-ha-766300-m02" [97b2b2f2-ea73-4de0-86aa-4854386b8f71] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "etcd-ha-766300-m03" [ddccee16-221c-4663-a38b-85a76115baf0] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "kindnet-6dc82" [d789c5c0-bde5-4abe-9bdd-515ce5c1a0f8] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "kindnet-gh6wt" [35666307-476d-460d-af1d-23d3bae8aec2] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "kindnet-scfzz" [ad036ebf-9679-47a6-b8e0-f433a34f55cb] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "kube-apiserver-ha-766300" [d1f122ef-d89f-4a4f-8194-86e5e84faea4] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "kube-apiserver-ha-766300-m02" [249c438f-592d-47ba-bf0b-252bde32a27d] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "kube-apiserver-ha-766300-m03" [27bb05ab-2345-469b-b8da-3f8c65d4c6cb] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "kube-controller-manager-ha-766300" [648bbb2b-06b4-487b-a9fa-c530a7ed5d11] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "kube-controller-manager-ha-766300-m02" [c8ab36c4-89ca-4519-8eaa-c27c00b78095] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "kube-controller-manager-ha-766300-m03" [91ce3e9c-5a16-483a-86cb-9eb67ae4825d] Running
	I0807 18:43:24.892392   12940 system_pods.go:61] "kube-proxy-8v6vm" [c6fa744a-fc9b-4da6-933a-866565e8318c] Running
	I0807 18:43:24.893784   12940 system_pods.go:61] "kube-proxy-d6ckx" [257858b0-6bb6-4bfb-9b5c-591fdb24929e] Running
	I0807 18:43:24.893784   12940 system_pods.go:61] "kube-proxy-mlf2g" [2b76f921-687d-4c43-bf2c-d3e8e5b865b2] Running
	I0807 18:43:24.893784   12940 system_pods.go:61] "kube-scheduler-ha-766300" [1d44914f-67d1-4b8f-934c-273d21dc7d60] Running
	I0807 18:43:24.893784   12940 system_pods.go:61] "kube-scheduler-ha-766300-m02" [22b9a1c1-e369-4270-90f6-f3caa10e0705] Running
	I0807 18:43:24.893784   12940 system_pods.go:61] "kube-scheduler-ha-766300-m03" [d32e668c-e2b9-42ed-944d-d3d4060c717b] Running
	I0807 18:43:24.893784   12940 system_pods.go:61] "kube-vip-ha-766300" [e2b31b5c-6e03-4e58-8cb4-10fc6869812b] Running
	I0807 18:43:24.893784   12940 system_pods.go:61] "kube-vip-ha-766300-m02" [0034d823-e21f-4be0-bbdb-09db13937fb7] Running
	I0807 18:43:24.893784   12940 system_pods.go:61] "kube-vip-ha-766300-m03" [cd71094c-0861-4ae6-86b3-051b3b3f8c63] Running
	I0807 18:43:24.893784   12940 system_pods.go:61] "storage-provisioner" [9a8a8ca1-bdd6-4ca8-a2d4-de3839223c9c] Running
	I0807 18:43:24.893784   12940 system_pods.go:74] duration metric: took 162.5482ms to wait for pod list to return data ...
	I0807 18:43:24.893784   12940 default_sa.go:34] waiting for default service account to be created ...
	I0807 18:43:25.074689   12940 request.go:629] Waited for 180.5999ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/default/serviceaccounts
	I0807 18:43:25.074689   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/default/serviceaccounts
	I0807 18:43:25.074689   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:25.074689   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:25.074689   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:25.079277   12940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:43:25.080589   12940 default_sa.go:45] found service account: "default"
	I0807 18:43:25.080589   12940 default_sa.go:55] duration metric: took 186.8029ms for default service account to be created ...
	I0807 18:43:25.080589   12940 system_pods.go:116] waiting for k8s-apps to be running ...
	I0807 18:43:25.264503   12940 request.go:629] Waited for 183.7581ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods
	I0807 18:43:25.264694   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/namespaces/kube-system/pods
	I0807 18:43:25.264694   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:25.264694   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:25.264694   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:25.273263   12940 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:43:25.284965   12940 system_pods.go:86] 24 kube-system pods found
	I0807 18:43:25.284965   12940 system_pods.go:89] "coredns-7db6d8ff4d-9tjv6" [54967df0-ac2c-4024-8947-b4e972a4b59a] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "coredns-7db6d8ff4d-fqjwg" [cc54cc3e-f40c-43c2-ac25-25bd315c3dd9] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "etcd-ha-766300" [5c619c4a-4fd5-494f-bb7b-80754258d40a] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "etcd-ha-766300-m02" [97b2b2f2-ea73-4de0-86aa-4854386b8f71] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "etcd-ha-766300-m03" [ddccee16-221c-4663-a38b-85a76115baf0] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kindnet-6dc82" [d789c5c0-bde5-4abe-9bdd-515ce5c1a0f8] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kindnet-gh6wt" [35666307-476d-460d-af1d-23d3bae8aec2] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kindnet-scfzz" [ad036ebf-9679-47a6-b8e0-f433a34f55cb] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-apiserver-ha-766300" [d1f122ef-d89f-4a4f-8194-86e5e84faea4] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-apiserver-ha-766300-m02" [249c438f-592d-47ba-bf0b-252bde32a27d] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-apiserver-ha-766300-m03" [27bb05ab-2345-469b-b8da-3f8c65d4c6cb] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-controller-manager-ha-766300" [648bbb2b-06b4-487b-a9fa-c530a7ed5d11] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-controller-manager-ha-766300-m02" [c8ab36c4-89ca-4519-8eaa-c27c00b78095] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-controller-manager-ha-766300-m03" [91ce3e9c-5a16-483a-86cb-9eb67ae4825d] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-proxy-8v6vm" [c6fa744a-fc9b-4da6-933a-866565e8318c] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-proxy-d6ckx" [257858b0-6bb6-4bfb-9b5c-591fdb24929e] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-proxy-mlf2g" [2b76f921-687d-4c43-bf2c-d3e8e5b865b2] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-scheduler-ha-766300" [1d44914f-67d1-4b8f-934c-273d21dc7d60] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-scheduler-ha-766300-m02" [22b9a1c1-e369-4270-90f6-f3caa10e0705] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-scheduler-ha-766300-m03" [d32e668c-e2b9-42ed-944d-d3d4060c717b] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-vip-ha-766300" [e2b31b5c-6e03-4e58-8cb4-10fc6869812b] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-vip-ha-766300-m02" [0034d823-e21f-4be0-bbdb-09db13937fb7] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "kube-vip-ha-766300-m03" [cd71094c-0861-4ae6-86b3-051b3b3f8c63] Running
	I0807 18:43:25.284965   12940 system_pods.go:89] "storage-provisioner" [9a8a8ca1-bdd6-4ca8-a2d4-de3839223c9c] Running
	I0807 18:43:25.284965   12940 system_pods.go:126] duration metric: took 204.3729ms to wait for k8s-apps to be running ...
	I0807 18:43:25.284965   12940 system_svc.go:44] waiting for kubelet service to be running ....
	I0807 18:43:25.295880   12940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:43:25.321563   12940 system_svc.go:56] duration metric: took 36.598ms WaitForService to wait for kubelet
	I0807 18:43:25.321563   12940 kubeadm.go:582] duration metric: took 26.8005931s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 18:43:25.321563   12940 node_conditions.go:102] verifying NodePressure condition ...
	I0807 18:43:25.466442   12940 request.go:629] Waited for 144.8775ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.88:8443/api/v1/nodes
	I0807 18:43:25.466442   12940 round_trippers.go:463] GET https://172.28.224.88:8443/api/v1/nodes
	I0807 18:43:25.466442   12940 round_trippers.go:469] Request Headers:
	I0807 18:43:25.466442   12940 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:43:25.466442   12940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 18:43:25.472262   12940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:43:25.473818   12940 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 18:43:25.473874   12940 node_conditions.go:123] node cpu capacity is 2
	I0807 18:43:25.473874   12940 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 18:43:25.473874   12940 node_conditions.go:123] node cpu capacity is 2
	I0807 18:43:25.473874   12940 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 18:43:25.473874   12940 node_conditions.go:123] node cpu capacity is 2
	I0807 18:43:25.473874   12940 node_conditions.go:105] duration metric: took 152.3093ms to run NodePressure ...
	I0807 18:43:25.473967   12940 start.go:241] waiting for startup goroutines ...
	I0807 18:43:25.473996   12940 start.go:255] writing updated cluster config ...
	I0807 18:43:25.485867   12940 ssh_runner.go:195] Run: rm -f paused
	I0807 18:43:25.635342   12940 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0807 18:43:25.644678   12940 out.go:177] * Done! kubectl is now configured to use "ha-766300" cluster and "default" namespace by default
	
	
	==> Docker <==
	Aug 07 18:35:18 ha-766300 cri-dockerd[1322]: time="2024-08-07T18:35:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/12d6c9334d4425d43319143dec237fcd1d312fef7c677a9975134d01282056a6/resolv.conf as [nameserver 172.28.224.1]"
	Aug 07 18:35:18 ha-766300 cri-dockerd[1322]: time="2024-08-07T18:35:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dde8345db34d686c4a2d04fd42f437311c2dff12db4e4dd99e35580a5452eb95/resolv.conf as [nameserver 172.28.224.1]"
	Aug 07 18:35:18 ha-766300 cri-dockerd[1322]: time="2024-08-07T18:35:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2a4270fc3f1c85a3f133cecec4a09f34590f6c234212ceba02843e977d9caa7f/resolv.conf as [nameserver 172.28.224.1]"
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.691842981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.692394613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.692469918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.692647328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.944485850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.944877973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.944907275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.946413663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.984554192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.984715702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.984736103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 18:35:18 ha-766300 dockerd[1431]: time="2024-08-07T18:35:18.984851310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 18:44:05 ha-766300 dockerd[1431]: time="2024-08-07T18:44:05.936488548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 18:44:05 ha-766300 dockerd[1431]: time="2024-08-07T18:44:05.937535814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 18:44:05 ha-766300 dockerd[1431]: time="2024-08-07T18:44:05.937679123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 18:44:05 ha-766300 dockerd[1431]: time="2024-08-07T18:44:05.938192155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 18:44:06 ha-766300 cri-dockerd[1322]: time="2024-08-07T18:44:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8fddb084e3687ad8a0d4294508da0d90d7fb78fa7e19d31c34592dc1b225afab/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 07 18:44:07 ha-766300 cri-dockerd[1322]: time="2024-08-07T18:44:07Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 07 18:44:07 ha-766300 dockerd[1431]: time="2024-08-07T18:44:07.878883437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 18:44:07 ha-766300 dockerd[1431]: time="2024-08-07T18:44:07.879041138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 18:44:07 ha-766300 dockerd[1431]: time="2024-08-07T18:44:07.879764643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 18:44:07 ha-766300 dockerd[1431]: time="2024-08-07T18:44:07.880367647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	23194f269aa45       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   17 minutes ago      Running             busybox                   0                   8fddb084e3687       busybox-fc5497c4f-bjlr2
	16929881bad0a       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   2a4270fc3f1c8       coredns-7db6d8ff4d-9tjv6
	83c48e5354794       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   dde8345db34d6       coredns-7db6d8ff4d-fqjwg
	3c1d664501256       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   12d6c9334d442       storage-provisioner
	da03949685ffc       kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3              26 minutes ago      Running             kindnet-cni               0                   b832453c59d79       kindnet-scfzz
	0d1a15c98c836       55bb025d2cfa5                                                                                         26 minutes ago      Running             kube-proxy                0                   3bb6abb82e815       kube-proxy-d6ckx
	dfcf346254418       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     27 minutes ago      Running             kube-vip                  0                   f692d837338a8       kube-vip-ha-766300
	a649001975784       3edc18e7b7672                                                                                         27 minutes ago      Running             kube-scheduler            0                   1bb59e814b31c       kube-scheduler-ha-766300
	f0640929d8e27       76932a3b37d7e                                                                                         27 minutes ago      Running             kube-controller-manager   0                   64dcc1244fc8e       kube-controller-manager-ha-766300
	507c64bcc82fe       1f6d574d502f3                                                                                         27 minutes ago      Running             kube-apiserver            0                   ec7864f9c3a86       kube-apiserver-ha-766300
	193edd22f66f2       3861cfcd7c04c                                                                                         27 minutes ago      Running             etcd                      0                   9df588292e306       etcd-ha-766300
	
	
	==> coredns [16929881bad0] <==
	[INFO] 10.244.0.4:44995 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010623881s
	[INFO] 10.244.0.4:50470 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176001s
	[INFO] 10.244.2.2:35902 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118301s
	[INFO] 10.244.2.2:43828 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000221402s
	[INFO] 10.244.2.2:54385 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000152001s
	[INFO] 10.244.2.2:54951 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101201s
	[INFO] 10.244.1.2:47735 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000245002s
	[INFO] 10.244.1.2:42104 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000699s
	[INFO] 10.244.1.2:56128 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112801s
	[INFO] 10.244.1.2:52441 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000058201s
	[INFO] 10.244.1.2:38748 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000134701s
	[INFO] 10.244.1.2:52360 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069401s
	[INFO] 10.244.0.4:57534 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181801s
	[INFO] 10.244.0.4:58557 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104201s
	[INFO] 10.244.2.2:55827 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097801s
	[INFO] 10.244.2.2:45886 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000612s
	[INFO] 10.244.1.2:51840 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118401s
	[INFO] 10.244.1.2:34688 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000172301s
	[INFO] 10.244.0.4:43231 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000175001s
	[INFO] 10.244.0.4:44271 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000126801s
	[INFO] 10.244.0.4:40974 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000333603s
	[INFO] 10.244.2.2:55045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000310003s
	[INFO] 10.244.1.2:57077 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200802s
	[INFO] 10.244.1.2:54114 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141001s
	[INFO] 10.244.1.2:48087 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108701s
	
	
	==> coredns [83c48e535479] <==
	[INFO] 127.0.0.1:35778 - 43758 "HINFO IN 3852137065385310320.8835117782204073892. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045677289s
	[INFO] 10.244.0.4:53905 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000542804s
	[INFO] 10.244.0.4:59238 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.183525039s
	[INFO] 10.244.0.4:55003 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.078953377s
	[INFO] 10.244.2.2:33889 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000114901s
	[INFO] 10.244.0.4:40720 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00262562s
	[INFO] 10.244.0.4:36444 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183002s
	[INFO] 10.244.0.4:43113 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000161602s
	[INFO] 10.244.2.2:42033 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.014107207s
	[INFO] 10.244.2.2:47908 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000299002s
	[INFO] 10.244.2.2:59253 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158201s
	[INFO] 10.244.2.2:46148 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095601s
	[INFO] 10.244.1.2:60723 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141601s
	[INFO] 10.244.1.2:50356 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000665s
	[INFO] 10.244.0.4:43623 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000168401s
	[INFO] 10.244.0.4:57113 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000140402s
	[INFO] 10.244.2.2:36171 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189301s
	[INFO] 10.244.2.2:58671 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148802s
	[INFO] 10.244.1.2:51248 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081201s
	[INFO] 10.244.1.2:33225 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163701s
	[INFO] 10.244.0.4:35196 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000162201s
	[INFO] 10.244.2.2:60165 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000247803s
	[INFO] 10.244.2.2:60957 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109001s
	[INFO] 10.244.2.2:45736 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135401s
	[INFO] 10.244.1.2:52909 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000169902s
	
	
	==> describe nodes <==
	Name:               ha-766300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-766300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=ha-766300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_07T18_34_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:34:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-766300
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 19:01:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 18:59:32 +0000   Wed, 07 Aug 2024 18:34:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 18:59:32 +0000   Wed, 07 Aug 2024 18:34:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 18:59:32 +0000   Wed, 07 Aug 2024 18:34:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 18:59:32 +0000   Wed, 07 Aug 2024 18:35:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.224.88
	  Hostname:    ha-766300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c5317959630842a6b7e0aa3810fe4295
	  System UUID:                5346e03b-026b-e04b-9201-e5a67ac4a16c
	  Boot ID:                    cac6f773-e394-492a-baf0-e6da55bb7dc7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bjlr2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7db6d8ff4d-9tjv6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-7db6d8ff4d-fqjwg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-766300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-scfzz                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-766300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-766300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-d6ckx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-766300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-766300                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26m                kube-proxy       
	  Normal  NodeHasSufficientPID     27m (x5 over 27m)  kubelet          Node ha-766300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m (x5 over 27m)  kubelet          Node ha-766300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x5 over 27m)  kubelet          Node ha-766300 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m                kubelet          Node ha-766300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                kubelet          Node ha-766300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                kubelet          Node ha-766300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           26m                node-controller  Node ha-766300 event: Registered Node ha-766300 in Controller
	  Normal  NodeReady                26m                kubelet          Node ha-766300 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node ha-766300 event: Registered Node ha-766300 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-766300 event: Registered Node ha-766300 in Controller
	
	
	Name:               ha-766300-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-766300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=ha-766300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_07T18_38_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:38:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-766300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 19:01:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 18:59:42 +0000   Wed, 07 Aug 2024 18:38:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 18:59:42 +0000   Wed, 07 Aug 2024 18:38:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 18:59:42 +0000   Wed, 07 Aug 2024 18:38:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 18:59:42 +0000   Wed, 07 Aug 2024 18:39:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.238.183
	  Hostname:    ha-766300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 36f054ad468f42ab970f742479c45f7a
	  System UUID:                42dafca7-5b82-6143-bac4-f9c62f25a264
	  Boot ID:                    1fa0c76a-003c-4aaa-93e8-84f4d372b400
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wf2xw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-766300-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-gh6wt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-766300-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-766300-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-8v6vm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-766300-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-766300-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node ha-766300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node ha-766300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node ha-766300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m                node-controller  Node ha-766300-m02 event: Registered Node ha-766300-m02 in Controller
	  Normal  RegisteredNode           22m                node-controller  Node ha-766300-m02 event: Registered Node ha-766300-m02 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-766300-m02 event: Registered Node ha-766300-m02 in Controller
	
	
	Name:               ha-766300-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-766300-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=ha-766300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_07T18_42_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:42:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-766300-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 19:01:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 18:59:41 +0000   Wed, 07 Aug 2024 18:42:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 18:59:41 +0000   Wed, 07 Aug 2024 18:42:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 18:59:41 +0000   Wed, 07 Aug 2024 18:42:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 18:59:41 +0000   Wed, 07 Aug 2024 18:43:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.233.130
	  Hostname:    ha-766300-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 97728720ee544fcab7db4cd2bb62cd5d
	  System UUID:                f483d94a-ed8f-3149-ad04-955322a17cb0
	  Boot ID:                    b5cedbd0-2c12-4be8-a4d1-4f9d3be93238
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vzv8c                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-766300-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-6dc82                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-766300-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-766300-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-mlf2g                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-766300-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-766300-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  RegisteredNode           19m                node-controller  Node ha-766300-m03 event: Registered Node ha-766300-m03 in Controller
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-766300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-766300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-766300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node ha-766300-m03 event: Registered Node ha-766300-m03 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-766300-m03 event: Registered Node ha-766300-m03 in Controller
	
	
	Name:               ha-766300-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-766300-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=ha-766300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_07T18_48_33_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:48:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-766300-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 19:01:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 18:59:16 +0000   Wed, 07 Aug 2024 18:48:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 18:59:16 +0000   Wed, 07 Aug 2024 18:48:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 18:59:16 +0000   Wed, 07 Aug 2024 18:48:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 18:59:16 +0000   Wed, 07 Aug 2024 18:49:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.229.155
	  Hostname:    ha-766300-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc4c2dd43147484daa36a59529775041
	  System UUID:                da6d66a3-618d-c243-99a3-17faa880c51e
	  Boot ID:                    123fef79-b553-47c1-9b5f-3ce3004e83e3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-mnhsw       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-rqdx6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x2 over 13m)  kubelet          Node ha-766300-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet          Node ha-766300-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node ha-766300-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-766300-m04 event: Registered Node ha-766300-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-766300-m04 event: Registered Node ha-766300-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-766300-m04 event: Registered Node ha-766300-m04 in Controller
	  Normal  NodeReady                12m                kubelet          Node ha-766300-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.245015] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug 7 18:33] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.176570] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[Aug 7 18:34] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[  +0.105023] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.559767] systemd-fstab-generator[1036]: Ignoring "noauto" option for root device
	[  +0.197671] systemd-fstab-generator[1048]: Ignoring "noauto" option for root device
	[  +0.247347] systemd-fstab-generator[1062]: Ignoring "noauto" option for root device
	[  +2.899254] systemd-fstab-generator[1275]: Ignoring "noauto" option for root device
	[  +0.196827] systemd-fstab-generator[1287]: Ignoring "noauto" option for root device
	[  +0.207142] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.273415] systemd-fstab-generator[1314]: Ignoring "noauto" option for root device
	[ +12.041424] systemd-fstab-generator[1417]: Ignoring "noauto" option for root device
	[  +0.121430] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.843613] systemd-fstab-generator[1676]: Ignoring "noauto" option for root device
	[  +6.261851] systemd-fstab-generator[1876]: Ignoring "noauto" option for root device
	[  +0.111987] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.516703] kauditd_printk_skb: 67 callbacks suppressed
	[  +3.550326] systemd-fstab-generator[2372]: Ignoring "noauto" option for root device
	[ +15.057245] kauditd_printk_skb: 17 callbacks suppressed
	[Aug 7 18:35] kauditd_printk_skb: 29 callbacks suppressed
	[Aug 7 18:38] kauditd_printk_skb: 26 callbacks suppressed
	[  +2.160193] hrtimer: interrupt took 2279434 ns
	
	
	==> etcd [193edd22f66f] <==
	{"level":"info","ts":"2024-08-07T18:48:37.776188Z","caller":"traceutil/trace.go:171","msg":"trace[311912753] transaction","detail":"{read_only:false; response_revision:2628; number_of_response:1; }","duration":"228.282815ms","start":"2024-08-07T18:48:37.547887Z","end":"2024-08-07T18:48:37.77617Z","steps":["trace[311912753] 'process raft request'  (duration: 176.213306ms)","trace[311912753] 'compare'  (duration: 51.225084ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-07T18:48:43.277876Z","caller":"traceutil/trace.go:171","msg":"trace[1906562593] transaction","detail":"{read_only:false; response_revision:2645; number_of_response:1; }","duration":"117.339443ms","start":"2024-08-07T18:48:43.160519Z","end":"2024-08-07T18:48:43.277858Z","steps":["trace[1906562593] 'process raft request'  (duration: 95.297596ms)","trace[1906562593] 'compare'  (duration: 21.669536ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-07T18:48:43.491362Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"be852c5e1a2772b3","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"3.177962ms"}
	{"level":"warn","ts":"2024-08-07T18:48:43.491487Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"13eb58aa3c04c232","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"3.306166ms"}
	{"level":"info","ts":"2024-08-07T18:48:43.498449Z","caller":"traceutil/trace.go:171","msg":"trace[196034064] transaction","detail":"{read_only:false; response_revision:2646; number_of_response:1; }","duration":"158.873962ms","start":"2024-08-07T18:48:43.33956Z","end":"2024-08-07T18:48:43.498434Z","steps":["trace[196034064] 'process raft request'  (duration: 152.00416ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T18:48:48.830476Z","caller":"traceutil/trace.go:171","msg":"trace[862262936] linearizableReadLoop","detail":"{readStateIndex:3169; appliedIndex:3169; }","duration":"108.847327ms","start":"2024-08-07T18:48:48.721606Z","end":"2024-08-07T18:48:48.830453Z","steps":["trace[862262936] 'read index received'  (duration: 108.840326ms)","trace[862262936] 'applied index is now lower than readState.Index'  (duration: 5.101µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-07T18:48:48.830761Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.133236ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-08-07T18:48:48.830797Z","caller":"traceutil/trace.go:171","msg":"trace[1571052655] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2663; }","duration":"109.214538ms","start":"2024-08-07T18:48:48.721572Z","end":"2024-08-07T18:48:48.830787Z","steps":["trace[1571052655] 'agreement among raft nodes before linearized reading'  (duration: 109.115235ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T18:48:49.827664Z","caller":"traceutil/trace.go:171","msg":"trace[1301388366] transaction","detail":"{read_only:false; response_revision:2667; number_of_response:1; }","duration":"187.589673ms","start":"2024-08-07T18:48:49.640043Z","end":"2024-08-07T18:48:49.827632Z","steps":["trace[1301388366] 'process raft request'  (duration: 179.569835ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-07T18:48:49.847566Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"be852c5e1a2772b3","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"59.695415ms"}
	{"level":"warn","ts":"2024-08-07T18:48:49.847895Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"13eb58aa3c04c232","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"60.020525ms"}
	{"level":"info","ts":"2024-08-07T18:48:49.848304Z","caller":"traceutil/trace.go:171","msg":"trace[1790873566] transaction","detail":"{read_only:false; response_revision:2668; number_of_response:1; }","duration":"175.422811ms","start":"2024-08-07T18:48:49.67287Z","end":"2024-08-07T18:48:49.848293Z","steps":["trace[1790873566] 'process raft request'  (duration: 175.364409ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-07T18:48:50.049117Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.730852ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-07T18:48:50.049203Z","caller":"traceutil/trace.go:171","msg":"trace[1261713086] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2668; }","duration":"113.256468ms","start":"2024-08-07T18:48:49.935932Z","end":"2024-08-07T18:48:50.049188Z","steps":["trace[1261713086] 'range keys from in-memory index tree'  (duration: 110.994001ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-07T18:48:50.049739Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.384582ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-766300-m04\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-08-07T18:48:50.049794Z","caller":"traceutil/trace.go:171","msg":"trace[1579813736] range","detail":"{range_begin:/registry/minions/ha-766300-m04; range_end:; response_count:1; response_revision:2668; }","duration":"110.455285ms","start":"2024-08-07T18:48:49.939329Z","end":"2024-08-07T18:48:50.049784Z","steps":["trace[1579813736] 'range keys from in-memory index tree'  (duration: 109.142145ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T18:49:38.1564Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1970}
	{"level":"info","ts":"2024-08-07T18:49:38.221709Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1970,"took":"64.227089ms","hash":2380018877,"current-db-size-bytes":3657728,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":2359296,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-08-07T18:49:38.221763Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2380018877,"revision":1970,"compact-revision":1066}
	{"level":"info","ts":"2024-08-07T18:54:38.223487Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2799}
	{"level":"info","ts":"2024-08-07T18:54:38.28433Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2799,"took":"58.942151ms","hash":4073136354,"current-db-size-bytes":3657728,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":2248704,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-08-07T18:54:38.284409Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4073136354,"revision":2799,"compact-revision":1970}
	{"level":"info","ts":"2024-08-07T18:59:38.25377Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":3543}
	{"level":"info","ts":"2024-08-07T18:59:38.314857Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":3543,"took":"58.477636ms","hash":3802570730,"current-db-size-bytes":3657728,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1949696,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2024-08-07T18:59:38.315111Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3802570730,"revision":3543,"compact-revision":2799}
	
	
	==> kernel <==
	 19:01:53 up 29 min,  0 users,  load average: 0.33, 0.56, 0.63
	Linux ha-766300 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [da03949685ff] <==
	I0807 19:01:17.025222       1 main.go:322] Node ha-766300-m02 has CIDR [10.244.1.0/24] 
	I0807 19:01:27.023925       1 main.go:295] Handling node with IPs: map[172.28.238.183:{}]
	I0807 19:01:27.024377       1 main.go:322] Node ha-766300-m02 has CIDR [10.244.1.0/24] 
	I0807 19:01:27.024801       1 main.go:295] Handling node with IPs: map[172.28.233.130:{}]
	I0807 19:01:27.024991       1 main.go:322] Node ha-766300-m03 has CIDR [10.244.2.0/24] 
	I0807 19:01:27.025317       1 main.go:295] Handling node with IPs: map[172.28.229.155:{}]
	I0807 19:01:27.025470       1 main.go:322] Node ha-766300-m04 has CIDR [10.244.3.0/24] 
	I0807 19:01:27.025567       1 main.go:295] Handling node with IPs: map[172.28.224.88:{}]
	I0807 19:01:27.025581       1 main.go:299] handling current node
	I0807 19:01:37.014668       1 main.go:295] Handling node with IPs: map[172.28.224.88:{}]
	I0807 19:01:37.014710       1 main.go:299] handling current node
	I0807 19:01:37.014729       1 main.go:295] Handling node with IPs: map[172.28.238.183:{}]
	I0807 19:01:37.014736       1 main.go:322] Node ha-766300-m02 has CIDR [10.244.1.0/24] 
	I0807 19:01:37.015259       1 main.go:295] Handling node with IPs: map[172.28.233.130:{}]
	I0807 19:01:37.015374       1 main.go:322] Node ha-766300-m03 has CIDR [10.244.2.0/24] 
	I0807 19:01:37.015475       1 main.go:295] Handling node with IPs: map[172.28.229.155:{}]
	I0807 19:01:37.015485       1 main.go:322] Node ha-766300-m04 has CIDR [10.244.3.0/24] 
	I0807 19:01:47.023187       1 main.go:295] Handling node with IPs: map[172.28.224.88:{}]
	I0807 19:01:47.023225       1 main.go:299] handling current node
	I0807 19:01:47.023243       1 main.go:295] Handling node with IPs: map[172.28.238.183:{}]
	I0807 19:01:47.023250       1 main.go:322] Node ha-766300-m02 has CIDR [10.244.1.0/24] 
	I0807 19:01:47.023610       1 main.go:295] Handling node with IPs: map[172.28.233.130:{}]
	I0807 19:01:47.023764       1 main.go:322] Node ha-766300-m03 has CIDR [10.244.2.0/24] 
	I0807 19:01:47.023843       1 main.go:295] Handling node with IPs: map[172.28.229.155:{}]
	I0807 19:01:47.023922       1 main.go:322] Node ha-766300-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [507c64bcc82f] <==
	I0807 18:34:43.803550       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0807 18:34:43.906468       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0807 18:34:43.956776       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0807 18:34:57.488851       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0807 18:34:57.771335       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0807 18:42:51.779639       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 11µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0807 18:42:51.779654       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0807 18:42:51.816625       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0807 18:42:51.865230       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0807 18:42:51.865697       1 timeout.go:142] post-timeout activity - time-elapsed: 152.032735ms, PATCH "/api/v1/namespaces/default/events/ha-766300-m03.17e986754b262b80" result: <nil>
	E0807 18:44:11.432809       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:50986: use of closed network connection
	E0807 18:44:11.993184       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:50988: use of closed network connection
	E0807 18:44:12.680856       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:50990: use of closed network connection
	E0807 18:44:13.256191       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:50992: use of closed network connection
	E0807 18:44:13.800993       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:50994: use of closed network connection
	E0807 18:44:14.378819       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:50996: use of closed network connection
	E0807 18:44:14.906373       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:50998: use of closed network connection
	E0807 18:44:15.449732       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:51000: use of closed network connection
	E0807 18:44:15.969673       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:51002: use of closed network connection
	E0807 18:44:16.936153       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:51005: use of closed network connection
	E0807 18:44:27.499343       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:51007: use of closed network connection
	E0807 18:44:28.013415       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:51010: use of closed network connection
	E0807 18:44:38.557430       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:51012: use of closed network connection
	E0807 18:44:39.054033       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:51015: use of closed network connection
	E0807 18:44:49.561313       1 conn.go:339] Error on socket receive: read tcp 172.28.239.254:8443->172.28.224.1:51017: use of closed network connection
	
	
	==> kube-controller-manager [f0640929d8e2] <==
	I0807 18:35:21.806687       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0807 18:38:45.681167       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-766300-m02\" does not exist"
	I0807 18:38:45.729734       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-766300-m02" podCIDRs=["10.244.1.0/24"]
	I0807 18:38:46.848512       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-766300-m02"
	I0807 18:42:50.875914       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-766300-m03\" does not exist"
	I0807 18:42:50.909950       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-766300-m03" podCIDRs=["10.244.2.0/24"]
	I0807 18:42:51.901007       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-766300-m03"
	I0807 18:44:04.835353       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="155.490627ms"
	I0807 18:44:05.117822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="282.414968ms"
	I0807 18:44:05.361890       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="243.994865ms"
	I0807 18:44:05.415503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.478621ms"
	I0807 18:44:05.416025       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="460.629µs"
	I0807 18:44:05.743993       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="189.25844ms"
	I0807 18:44:05.744565       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="509.732µs"
	I0807 18:44:08.240855       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.057742ms"
	I0807 18:44:08.241725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.5µs"
	I0807 18:44:08.447525       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="99.075803ms"
	I0807 18:44:08.448231       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="530.804µs"
	I0807 18:44:08.578645       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.590198ms"
	I0807 18:44:08.579433       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="170.301µs"
	E0807 18:48:32.761294       1 certificate_controller.go:146] Sync csr-vnvw2 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-vnvw2": the object has been modified; please apply your changes to the latest version and try again
	I0807 18:48:32.869704       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-766300-m04\" does not exist"
	I0807 18:48:32.964222       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-766300-m04" podCIDRs=["10.244.3.0/24"]
	I0807 18:48:36.998510       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-766300-m04"
	I0807 18:49:05.551030       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-766300-m04"
	
	
	==> kube-proxy [0d1a15c98c83] <==
	I0807 18:34:58.963339       1 server_linux.go:69] "Using iptables proxy"
	I0807 18:34:58.980292       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.224.88"]
	I0807 18:34:59.061540       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0807 18:34:59.061693       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0807 18:34:59.061754       1 server_linux.go:165] "Using iptables Proxier"
	I0807 18:34:59.065726       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0807 18:34:59.066407       1 server.go:872] "Version info" version="v1.30.3"
	I0807 18:34:59.066519       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 18:34:59.067988       1 config.go:192] "Starting service config controller"
	I0807 18:34:59.068028       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0807 18:34:59.068121       1 config.go:101] "Starting endpoint slice config controller"
	I0807 18:34:59.068133       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0807 18:34:59.068808       1 config.go:319] "Starting node config controller"
	I0807 18:34:59.068844       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0807 18:34:59.169255       1 shared_informer.go:320] Caches are synced for node config
	I0807 18:34:59.169317       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0807 18:34:59.169291       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [a64900197578] <==
	W0807 18:34:41.882748       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0807 18:34:41.883113       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0807 18:34:41.954032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0807 18:34:41.954212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0807 18:34:42.012056       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0807 18:34:42.012532       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0807 18:34:42.065921       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0807 18:34:42.065975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0807 18:34:42.078139       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0807 18:34:42.078518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0807 18:34:42.162830       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0807 18:34:42.162872       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0807 18:34:42.190521       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0807 18:34:42.190865       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0807 18:34:42.210057       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0807 18:34:42.210377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0807 18:34:44.207344       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0807 18:44:04.856049       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-bjlr2\": pod busybox-fc5497c4f-bjlr2 is already assigned to node \"ha-766300\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-bjlr2" node="ha-766300"
	E0807 18:44:04.858389       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod a2c15ee6-19fe-4744-8b8e-419dcae7ca05(default/busybox-fc5497c4f-bjlr2) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-bjlr2"
	E0807 18:44:04.858968       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-bjlr2\": pod busybox-fc5497c4f-bjlr2 is already assigned to node \"ha-766300\"" pod="default/busybox-fc5497c4f-bjlr2"
	I0807 18:44:04.859186       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-bjlr2" node="ha-766300"
	E0807 18:48:33.074137       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-9rdw6\": pod kube-proxy-9rdw6 is already assigned to node \"ha-766300-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-9rdw6" node="ha-766300-m04"
	E0807 18:48:33.074832       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 27439927-bdbd-4129-a35e-bf60bc34b25d(kube-system/kube-proxy-9rdw6) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-9rdw6"
	E0807 18:48:33.074881       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-9rdw6\": pod kube-proxy-9rdw6 is already assigned to node \"ha-766300-m04\"" pod="kube-system/kube-proxy-9rdw6"
	I0807 18:48:33.074919       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-9rdw6" node="ha-766300-m04"
	
	
	==> kubelet <==
	Aug 07 18:57:43 ha-766300 kubelet[2378]: E0807 18:57:43.930170    2378 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 18:57:43 ha-766300 kubelet[2378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 18:57:43 ha-766300 kubelet[2378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:57:43 ha-766300 kubelet[2378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:57:43 ha-766300 kubelet[2378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 18:58:43 ha-766300 kubelet[2378]: E0807 18:58:43.932660    2378 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 18:58:43 ha-766300 kubelet[2378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 18:58:43 ha-766300 kubelet[2378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:58:43 ha-766300 kubelet[2378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:58:43 ha-766300 kubelet[2378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 18:59:43 ha-766300 kubelet[2378]: E0807 18:59:43.932561    2378 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 18:59:43 ha-766300 kubelet[2378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 18:59:43 ha-766300 kubelet[2378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:59:43 ha-766300 kubelet[2378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:59:43 ha-766300 kubelet[2378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 19:00:43 ha-766300 kubelet[2378]: E0807 19:00:43.932358    2378 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 19:00:43 ha-766300 kubelet[2378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 19:00:43 ha-766300 kubelet[2378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 19:00:43 ha-766300 kubelet[2378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 19:00:43 ha-766300 kubelet[2378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 19:01:43 ha-766300 kubelet[2378]: E0807 19:01:43.928590    2378 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 19:01:43 ha-766300 kubelet[2378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 19:01:43 ha-766300 kubelet[2378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 19:01:43 ha-766300 kubelet[2378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 19:01:43 ha-766300 kubelet[2378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0807 19:01:44.301294    9916 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
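The "Unable to resolve the current Docker CLI context" warning in the stderr block above references a metadata path whose final directory name is a hex digest. Assuming Docker's context store layout (metadata for a context named N lives under `contexts/meta/<sha256(N)>`), the directory in the warning should be the SHA-256 of the context name `default`; a minimal sketch checking that correspondence:

```python
import hashlib

# Docker's CLI context store keys metadata directories by the SHA-256
# hex digest of the context name (an assumption about the store layout,
# consistent with the path shown in the warning above).
digest = hashlib.sha256(b"default").hexdigest()

# Directory name taken verbatim from the warning in the stderr block.
warned_dir = "37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f"

print(digest == warned_dir)
```

If the two match, the warning simply means the `default` context's `meta.json` was never created on this Jenkins worker; it is cosmetic noise rather than the cause of the test failure.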
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-766300 -n ha-766300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-766300 -n ha-766300: (13.2050756s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-766300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (694.80s)

                                                
                                    
TestMountStart/serial/RestartStopped (192.79s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-878600
mount_start_test.go:166: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p mount-start-2-878600: exit status 90 (3m0.4617135s)

                                                
                                                
-- stdout --
	* [mount-start-2-878600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19389
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting minikube without Kubernetes in cluster mount-start-2-878600
	* Restarting existing hyperv VM for "mount-start-2-878600" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0807 19:30:13.030183    5764 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 07 19:31:43 mount-start-2-878600 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 19:31:43 mount-start-2-878600 dockerd[656]: time="2024-08-07T19:31:43.718179703Z" level=info msg="Starting up"
	Aug 07 19:31:43 mount-start-2-878600 dockerd[656]: time="2024-08-07T19:31:43.719424546Z" level=info msg="containerd not running, starting managed containerd"
	Aug 07 19:31:43 mount-start-2-878600 dockerd[656]: time="2024-08-07T19:31:43.720913498Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=663
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.754649367Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.781189088Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.781291491Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.781362294Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.781383395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.781817410Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.781950914Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.782247524Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.782339028Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.782360528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.782371329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.782838045Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.783518069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.786356267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.786808683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.787021390Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.787083392Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.787823918Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.787924021Z" level=info msg="metadata content store policy set" policy=shared
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.790065296Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.790229201Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.790300904Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.790401607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.790422108Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.790506811Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.790972027Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791073131Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791213835Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791233936Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791246837Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791258537Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791274637Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791288238Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791301238Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791313739Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791330639Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791342940Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791360640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791373041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791383741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791395342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791406442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791418542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791428843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791444243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791456144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791469944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791480845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791491645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791503045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791516146Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791534447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791548847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791560347Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791777955Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791821756Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791836757Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791849757Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791860058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791885959Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.791899759Z" level=info msg="NRI interface is disabled by configuration."
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.792258672Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.792505780Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.792636985Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 07 19:31:43 mount-start-2-878600 dockerd[663]: time="2024-08-07T19:31:43.792687486Z" level=info msg="containerd successfully booted in 0.040857s"
	Aug 07 19:31:44 mount-start-2-878600 dockerd[656]: time="2024-08-07T19:31:44.772304680Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 07 19:31:44 mount-start-2-878600 dockerd[656]: time="2024-08-07T19:31:44.797881411Z" level=info msg="Loading containers: start."
	Aug 07 19:31:44 mount-start-2-878600 dockerd[656]: time="2024-08-07T19:31:44.953878983Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 07 19:31:45 mount-start-2-878600 dockerd[656]: time="2024-08-07T19:31:45.092956698Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 07 19:31:45 mount-start-2-878600 dockerd[656]: time="2024-08-07T19:31:45.192871629Z" level=info msg="Loading containers: done."
	Aug 07 19:31:45 mount-start-2-878600 dockerd[656]: time="2024-08-07T19:31:45.220277352Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 07 19:31:45 mount-start-2-878600 dockerd[656]: time="2024-08-07T19:31:45.220990979Z" level=info msg="Daemon has completed initialization"
	Aug 07 19:31:45 mount-start-2-878600 systemd[1]: Started Docker Application Container Engine.
	Aug 07 19:31:45 mount-start-2-878600 dockerd[656]: time="2024-08-07T19:31:45.285250978Z" level=info msg="API listen on [::]:2376"
	Aug 07 19:31:45 mount-start-2-878600 dockerd[656]: time="2024-08-07T19:31:45.285761797Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 07 19:32:12 mount-start-2-878600 systemd[1]: Stopping Docker Application Container Engine...
	Aug 07 19:32:12 mount-start-2-878600 dockerd[656]: time="2024-08-07T19:32:12.201316731Z" level=info msg="Processing signal 'terminated'"
	Aug 07 19:32:12 mount-start-2-878600 dockerd[656]: time="2024-08-07T19:32:12.202933142Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 07 19:32:12 mount-start-2-878600 dockerd[656]: time="2024-08-07T19:32:12.203349045Z" level=info msg="Daemon shutdown complete"
	Aug 07 19:32:12 mount-start-2-878600 dockerd[656]: time="2024-08-07T19:32:12.203648047Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 07 19:32:12 mount-start-2-878600 dockerd[656]: time="2024-08-07T19:32:12.203671847Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 07 19:32:13 mount-start-2-878600 systemd[1]: docker.service: Deactivated successfully.
	Aug 07 19:32:13 mount-start-2-878600 systemd[1]: Stopped Docker Application Container Engine.
	Aug 07 19:32:13 mount-start-2-878600 systemd[1]: Starting Docker Application Container Engine...
	Aug 07 19:32:13 mount-start-2-878600 dockerd[1072]: time="2024-08-07T19:32:13.262790619Z" level=info msg="Starting up"
	Aug 07 19:33:13 mount-start-2-878600 dockerd[1072]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 07 19:33:13 mount-start-2-878600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 07 19:33:13 mount-start-2-878600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 07 19:33:13 mount-start-2-878600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:168: restart failed: "out/minikube-windows-amd64.exe start -p mount-start-2-878600" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-878600 -n mount-start-2-878600
E0807 19:33:20.530955    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-878600 -n mount-start-2-878600: exit status 6 (12.3281359s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0807 19:33:13.507860    9288 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0807 19:33:25.650215    9288 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-2-878600" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-878600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/RestartStopped (192.79s)

TestMultiNode/serial/PingHostFrom2Pods (58.72s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-116700 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-116700 -- exec busybox-fc5497c4f-jpc88 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-116700 -- exec busybox-fc5497c4f-jpc88 -- sh -c "ping -c 1 172.28.224.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-116700 -- exec busybox-fc5497c4f-jpc88 -- sh -c "ping -c 1 172.28.224.1": exit status 1 (10.5141903s)

-- stdout --
	PING 172.28.224.1 (172.28.224.1): 56 data bytes
	
	--- 172.28.224.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0807 19:42:17.918840   12932 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.28.224.1) from pod (busybox-fc5497c4f-jpc88): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-116700 -- exec busybox-fc5497c4f-s4njd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-116700 -- exec busybox-fc5497c4f-s4njd -- sh -c "ping -c 1 172.28.224.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-116700 -- exec busybox-fc5497c4f-s4njd -- sh -c "ping -c 1 172.28.224.1": exit status 1 (10.517928s)

-- stdout --
	PING 172.28.224.1 (172.28.224.1): 56 data bytes
	
	--- 172.28.224.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0807 19:42:28.962899   12368 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.28.224.1) from pod (busybox-fc5497c4f-s4njd): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-116700 -n multinode-116700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-116700 -n multinode-116700: (12.6137021s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 logs -n 25: (8.9898987s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-878600                           | mount-start-2-878600 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:26 UTC | 07 Aug 24 19:28 UTC |
	|         | --memory=2048 --mount                             |                      |                   |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |                   |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |                   |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| mount   | C:\Users\jenkins.minikube6:/minikube-host         | mount-start-2-878600 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:28 UTC |                     |
	|         | --profile mount-start-2-878600 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-878600 ssh -- ls                    | mount-start-2-878600 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:28 UTC | 07 Aug 24 19:29 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-878600                           | mount-start-1-878600 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:29 UTC | 07 Aug 24 19:29 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-878600 ssh -- ls                    | mount-start-2-878600 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:29 UTC | 07 Aug 24 19:29 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-878600                           | mount-start-2-878600 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:29 UTC | 07 Aug 24 19:30 UTC |
	| start   | -p mount-start-2-878600                           | mount-start-2-878600 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:30 UTC |                     |
	| delete  | -p mount-start-2-878600                           | mount-start-2-878600 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:33 UTC | 07 Aug 24 19:34 UTC |
	| delete  | -p mount-start-1-878600                           | mount-start-1-878600 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:34 UTC | 07 Aug 24 19:34 UTC |
	| start   | -p multinode-116700                               | multinode-116700     | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:34 UTC | 07 Aug 24 19:41 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-116700 -- apply -f                   | multinode-116700     | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:42 UTC | 07 Aug 24 19:42 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-116700 -- rollout                    | multinode-116700     | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:42 UTC | 07 Aug 24 19:42 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-116700 -- get pods -o                | multinode-116700     | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:42 UTC | 07 Aug 24 19:42 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-116700 -- get pods -o                | multinode-116700     | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:42 UTC | 07 Aug 24 19:42 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-116700 -- exec                       | multinode-116700     | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:42 UTC | 07 Aug 24 19:42 UTC |
	|         | busybox-fc5497c4f-jpc88 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-116700 -- exec                       | multinode-116700     | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:42 UTC | 07 Aug 24 19:42 UTC |
	|         | busybox-fc5497c4f-s4njd --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-116700 -- exec                       | multinode-116700     | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:42 UTC | 07 Aug 24 19:42 UTC |
	|         | busybox-fc5497c4f-jpc88 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-116700 -- exec                       | multinode-116700     | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:42 UTC | 07 Aug 24 19:42 UTC |
	|         | busybox-fc5497c4f-s4njd --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-116700 -- exec                       | multinode-116700     | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:42 UTC | 07 Aug 24 19:42 UTC |
	|         | busybox-fc5497c4f-jpc88 -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-116700 -- exec                       | multinode-116700     | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:42 UTC | 07 Aug 24 19:42 UTC |
	|         | busybox-fc5497c4f-s4njd -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-116700 -- get pods -o                | multinode-116700     | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:42 UTC | 07 Aug 24 19:42 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-116700 -- exec                       | multinode-116700     | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:42 UTC | 07 Aug 24 19:42 UTC |
	|         | busybox-fc5497c4f-jpc88                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-116700 -- exec                       | multinode-116700     | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:42 UTC |                     |
	|         | busybox-fc5497c4f-jpc88 -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.224.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-116700 -- exec                       | multinode-116700     | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:42 UTC | 07 Aug 24 19:42 UTC |
	|         | busybox-fc5497c4f-s4njd                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-116700 -- exec                       | multinode-116700     | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:42 UTC |                     |
	|         | busybox-fc5497c4f-s4njd -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.224.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 19:34:28
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 19:34:28.890794     956 out.go:291] Setting OutFile to fd 1276 ...
	I0807 19:34:28.891707     956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:34:28.891707     956 out.go:304] Setting ErrFile to fd 1080...
	I0807 19:34:28.891707     956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:34:28.915309     956 out.go:298] Setting JSON to false
	I0807 19:34:28.918024     956 start.go:129] hostinfo: {"hostname":"minikube6","uptime":321198,"bootTime":1722738070,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4717 Build 19045.4717","kernelVersion":"10.0.19045.4717 Build 19045.4717","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0807 19:34:28.919016     956 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 19:34:28.926386     956 out.go:177] * [multinode-116700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	I0807 19:34:28.931503     956 notify.go:220] Checking for updates...
	I0807 19:34:28.931975     956 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 19:34:28.935143     956 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 19:34:28.940062     956 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0807 19:34:28.943122     956 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 19:34:28.945889     956 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 19:34:28.949803     956 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 19:34:28.949803     956 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 19:34:34.394359     956 out.go:177] * Using the hyperv driver based on user configuration
	I0807 19:34:34.398098     956 start.go:297] selected driver: hyperv
	I0807 19:34:34.398731     956 start.go:901] validating driver "hyperv" against <nil>
	I0807 19:34:34.398731     956 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 19:34:34.448834     956 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 19:34:34.449844     956 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 19:34:34.449844     956 cni.go:84] Creating CNI manager for ""
	I0807 19:34:34.449844     956 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0807 19:34:34.449844     956 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0807 19:34:34.450471     956 start.go:340] cluster config:
	{Name:multinode-116700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-116700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:34:34.450597     956 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 19:34:34.458204     956 out.go:177] * Starting "multinode-116700" primary control-plane node in "multinode-116700" cluster
	I0807 19:34:34.462142     956 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 19:34:34.462375     956 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0807 19:34:34.462444     956 cache.go:56] Caching tarball of preloaded images
	I0807 19:34:34.462942     956 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0807 19:34:34.462942     956 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 19:34:34.462942     956 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\config.json ...
	I0807 19:34:34.462942     956 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\config.json: {Name:mk602d54f28a74d907fd5d98e9a397b022da83ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:34:34.464863     956 start.go:360] acquireMachinesLock for multinode-116700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 19:34:34.464968     956 start.go:364] duration metric: took 105.1µs to acquireMachinesLock for "multinode-116700"
	I0807 19:34:34.464968     956 start.go:93] Provisioning new machine with config: &{Name:multinode-116700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-116700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 19:34:34.464968     956 start.go:125] createHost starting for "" (driver="hyperv")
	I0807 19:34:34.469091     956 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 19:34:34.469636     956 start.go:159] libmachine.API.Create for "multinode-116700" (driver="hyperv")
	I0807 19:34:34.469741     956 client.go:168] LocalClient.Create starting
	I0807 19:34:34.469954     956 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0807 19:34:34.469954     956 main.go:141] libmachine: Decoding PEM data...
	I0807 19:34:34.470554     956 main.go:141] libmachine: Parsing certificate...
	I0807 19:34:34.470698     956 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0807 19:34:34.470924     956 main.go:141] libmachine: Decoding PEM data...
	I0807 19:34:34.470924     956 main.go:141] libmachine: Parsing certificate...
	I0807 19:34:34.471153     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0807 19:34:36.566477     956 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0807 19:34:36.566477     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:34:36.566554     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0807 19:34:38.322163     956 main.go:141] libmachine: [stdout =====>] : False
	
	I0807 19:34:38.322500     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:34:38.322500     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0807 19:34:39.807697     956 main.go:141] libmachine: [stdout =====>] : True
	
	I0807 19:34:39.807697     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:34:39.807697     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0807 19:34:43.510959     956 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0807 19:34:43.510959     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:34:43.514115     956 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0807 19:34:44.011598     956 main.go:141] libmachine: Creating SSH key...
	I0807 19:34:44.299966     956 main.go:141] libmachine: Creating VM...
	I0807 19:34:44.299966     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0807 19:34:47.254433     956 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0807 19:34:47.254433     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:34:47.254708     956 main.go:141] libmachine: Using switch "Default Switch"
	I0807 19:34:47.254798     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0807 19:34:49.035370     956 main.go:141] libmachine: [stdout =====>] : True
	
	I0807 19:34:49.035370     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:34:49.035370     956 main.go:141] libmachine: Creating VHD
	I0807 19:34:49.036437     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0807 19:34:52.868415     956 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 389F9846-1140-474D-A410-A153C4FE29C7
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0807 19:34:52.868595     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:34:52.868595     956 main.go:141] libmachine: Writing magic tar header
	I0807 19:34:52.868595     956 main.go:141] libmachine: Writing SSH key tar header
	I0807 19:34:52.878804     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0807 19:34:56.064301     956 main.go:141] libmachine: [stdout =====>] : 
	I0807 19:34:56.064301     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:34:56.064713     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700\disk.vhd' -SizeBytes 20000MB
	I0807 19:34:58.666534     956 main.go:141] libmachine: [stdout =====>] : 
	I0807 19:34:58.666534     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:34:58.666534     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-116700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0807 19:35:02.399610     956 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-116700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0807 19:35:02.399610     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:35:02.400221     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-116700 -DynamicMemoryEnabled $false
	I0807 19:35:04.720267     956 main.go:141] libmachine: [stdout =====>] : 
	I0807 19:35:04.720517     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:35:04.720517     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-116700 -Count 2
	I0807 19:35:06.970488     956 main.go:141] libmachine: [stdout =====>] : 
	I0807 19:35:06.970488     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:35:06.970956     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-116700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700\boot2docker.iso'
	I0807 19:35:09.588885     956 main.go:141] libmachine: [stdout =====>] : 
	I0807 19:35:09.588885     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:35:09.589902     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-116700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700\disk.vhd'
	I0807 19:35:12.319244     956 main.go:141] libmachine: [stdout =====>] : 
	I0807 19:35:12.319244     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:35:12.319244     956 main.go:141] libmachine: Starting VM...
	I0807 19:35:12.320291     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-116700
	I0807 19:35:15.557532     956 main.go:141] libmachine: [stdout =====>] : 
	I0807 19:35:15.557532     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:35:15.558384     956 main.go:141] libmachine: Waiting for host to start...
	I0807 19:35:15.558590     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:35:17.873202     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:35:17.873802     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:35:17.873929     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:35:20.475011     956 main.go:141] libmachine: [stdout =====>] : 
	I0807 19:35:20.475525     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:35:21.491064     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:35:23.734346     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:35:23.734397     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:35:23.734397     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:35:26.336097     956 main.go:141] libmachine: [stdout =====>] : 
	I0807 19:35:26.336405     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:35:27.347984     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:35:29.628323     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:35:29.628976     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:35:29.629271     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:35:32.195900     956 main.go:141] libmachine: [stdout =====>] : 
	I0807 19:35:32.195900     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:35:33.205311     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:35:35.524641     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:35:35.524641     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:35:35.524641     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:35:38.143971     956 main.go:141] libmachine: [stdout =====>] : 
	I0807 19:35:38.143971     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:35:39.154694     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:35:41.472672     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:35:41.473273     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:35:41.473417     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:35:44.118061     956 main.go:141] libmachine: [stdout =====>] : 172.28.224.86
	
	I0807 19:35:44.118340     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:35:44.118340     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:35:46.300727     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:35:46.300727     956 main.go:141] libmachine: [stderr =====>] : 
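The repeated `( Hyper-V\Get-VM … ).state` / `ipaddresses[0]` queries above are a poll-until-nonempty loop: minikube re-asks Hyper-V roughly once a second until the adapter reports an address (here `172.28.224.86` after about 30 seconds). A minimal shell sketch of that retry pattern, where `probe_vm_ip` is a hypothetical stand-in for the PowerShell query (it starts answering on its third call):

```shell
#!/bin/sh
# Poll a probe until it returns non-empty output, as the wait loop
# above does against Hyper-V. The counter file lets the fake probe
# keep state across $(...) subshell invocations.
count_file=$(mktemp)
echo 0 > "$count_file"

probe_vm_ip() {
  # Fake probe: empty output on the first two calls, then an address.
  n=$(($(cat "$count_file") + 1))
  echo "$n" > "$count_file"
  if [ "$n" -ge 3 ]; then echo "172.28.224.86"; fi
}

wait_for_ip() {
  tries=0
  while [ "$tries" -lt 10 ]; do
    ip=$(probe_vm_ip)
    if [ -n "$ip" ]; then
      echo "$ip"
      return 0
    fi
    tries=$((tries + 1))
    sleep 0.1   # the real loop waits ~1s between Hyper-V queries
  done
  return 1
}

wait_for_ip
```

The same shape appears throughout this log: every SSH step is preceded by one state query and one IP query, because libmachine re-resolves the guest address before each connection.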
	I0807 19:35:46.301120     956 machine.go:94] provisionDockerMachine start ...
	I0807 19:35:46.301120     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:35:48.517083     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:35:48.517267     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:35:48.517341     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:35:51.124795     956 main.go:141] libmachine: [stdout =====>] : 172.28.224.86
	
	I0807 19:35:51.124795     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:35:51.130024     956 main.go:141] libmachine: Using SSH client type: native
	I0807 19:35:51.141972     956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.86 22 <nil> <nil>}
	I0807 19:35:51.141972     956 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 19:35:51.285932     956 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0807 19:35:51.285932     956 buildroot.go:166] provisioning hostname "multinode-116700"
	I0807 19:35:51.285932     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:35:53.504961     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:35:53.505340     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:35:53.505432     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:35:56.138945     956 main.go:141] libmachine: [stdout =====>] : 172.28.224.86
	
	I0807 19:35:56.138945     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:35:56.145669     956 main.go:141] libmachine: Using SSH client type: native
	I0807 19:35:56.145669     956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.86 22 <nil> <nil>}
	I0807 19:35:56.145669     956 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-116700 && echo "multinode-116700" | sudo tee /etc/hostname
	I0807 19:35:56.304086     956 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-116700
	
	I0807 19:35:56.304086     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:35:58.510143     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:35:58.510143     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:35:58.510706     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:36:01.121807     956 main.go:141] libmachine: [stdout =====>] : 172.28.224.86
	
	I0807 19:36:01.121906     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:36:01.127513     956 main.go:141] libmachine: Using SSH client type: native
	I0807 19:36:01.128274     956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.86 22 <nil> <nil>}
	I0807 19:36:01.128274     956 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-116700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-116700/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-116700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 19:36:01.277854     956 main.go:141] libmachine: SSH cmd err, output: <nil>: 
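The `/etc/hosts` snippet it just ran is idempotent: it rewrites an existing `127.0.1.1` entry, appends one if none exists, and does nothing when the hostname is already present. The same logic can be exercised against a scratch file (no sudo; `HOSTS` and `NAME` are local stand-ins, and `\s` / `sed -i` assume GNU grep and sed as on the buildroot guest):

```shell
#!/bin/sh
# Same grep/sed logic as the SSH command above, pointed at a temp file
# instead of /etc/hosts. Running it twice changes nothing on the
# second pass.
HOSTS=$(mktemp)
NAME=multinode-116700
printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$HOSTS"

set_hostname_entry() {
  if ! grep -xq ".*\s$NAME" "$HOSTS"; then
    if grep -xq '127.0.1.1\s.*' "$HOSTS"; then
      sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/g" "$HOSTS"
    else
      echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
  fi
}

set_hostname_entry
set_hostname_entry   # second run is a no-op: no duplicate entry
cat "$HOSTS"
```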
	I0807 19:36:01.277854     956 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0807 19:36:01.277854     956 buildroot.go:174] setting up certificates
	I0807 19:36:01.277854     956 provision.go:84] configureAuth start
	I0807 19:36:01.277854     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:36:03.457344     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:36:03.457344     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:36:03.458022     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:36:06.062030     956 main.go:141] libmachine: [stdout =====>] : 172.28.224.86
	
	I0807 19:36:06.062030     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:36:06.062389     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:36:08.279263     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:36:08.279263     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:36:08.279922     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:36:10.906243     956 main.go:141] libmachine: [stdout =====>] : 172.28.224.86
	
	I0807 19:36:10.906243     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:36:10.906243     956 provision.go:143] copyHostCerts
	I0807 19:36:10.906243     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0807 19:36:10.907060     956 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0807 19:36:10.907191     956 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0807 19:36:10.907408     956 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0807 19:36:10.908212     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0807 19:36:10.908837     956 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0807 19:36:10.908929     956 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0807 19:36:10.909323     956 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0807 19:36:10.910444     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0807 19:36:10.910511     956 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0807 19:36:10.910511     956 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0807 19:36:10.910511     956 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0807 19:36:10.912055     956 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-116700 san=[127.0.0.1 172.28.224.86 localhost minikube multinode-116700]
	I0807 19:36:11.095198     956 provision.go:177] copyRemoteCerts
	I0807 19:36:11.106202     956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 19:36:11.106202     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:36:13.296313     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:36:13.296553     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:36:13.296553     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:36:15.924273     956 main.go:141] libmachine: [stdout =====>] : 172.28.224.86
	
	I0807 19:36:15.924273     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:36:15.924983     956 sshutil.go:53] new ssh client: &{IP:172.28.224.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700\id_rsa Username:docker}
	I0807 19:36:16.037124     956 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9308588s)
	I0807 19:36:16.037124     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0807 19:36:16.037749     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0807 19:36:16.087443     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0807 19:36:16.087595     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 19:36:16.131298     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0807 19:36:16.131679     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0807 19:36:16.175411     956 provision.go:87] duration metric: took 14.8973108s to configureAuth
	I0807 19:36:16.175469     956 buildroot.go:189] setting minikube options for container-runtime
	I0807 19:36:16.176053     956 config.go:182] Loaded profile config "multinode-116700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 19:36:16.176053     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:36:18.381324     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:36:18.381324     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:36:18.382019     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:36:20.981509     956 main.go:141] libmachine: [stdout =====>] : 172.28.224.86
	
	I0807 19:36:20.982006     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:36:20.987257     956 main.go:141] libmachine: Using SSH client type: native
	I0807 19:36:20.987982     956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.86 22 <nil> <nil>}
	I0807 19:36:20.987982     956 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0807 19:36:21.130592     956 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0807 19:36:21.130676     956 buildroot.go:70] root file system type: tmpfs
	I0807 19:36:21.130830     956 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0807 19:36:21.130977     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:36:23.338389     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:36:23.338389     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:36:23.338522     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:36:25.927985     956 main.go:141] libmachine: [stdout =====>] : 172.28.224.86
	
	I0807 19:36:25.927985     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:36:25.933044     956 main.go:141] libmachine: Using SSH client type: native
	I0807 19:36:25.933716     956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.86 22 <nil> <nil>}
	I0807 19:36:25.933716     956 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0807 19:36:26.101252     956 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0807 19:36:26.101382     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:36:28.278369     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:36:28.278777     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:36:28.278777     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:36:30.897554     956 main.go:141] libmachine: [stdout =====>] : 172.28.224.86
	
	I0807 19:36:30.898024     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:36:30.904348     956 main.go:141] libmachine: Using SSH client type: native
	I0807 19:36:30.904348     956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.86 22 <nil> <nil>}
	I0807 19:36:30.904348     956 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0807 19:36:33.077319     956 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0807 19:36:33.077395     956 machine.go:97] duration metric: took 46.7756808s to provisionDockerMachine
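Editor's note on the step above: the provisioner uses a write-then-swap idiom. The new unit is written to `docker.service.new`, and only when `diff` reports a difference (or, as here, the old unit does not exist at all, so `diff` fails with "can't stat") is the new file moved into place and the daemon reloaded. A minimal sketch of that idiom, with throwaway `/tmp` paths standing in for the real unit files:

```shell
#!/bin/sh
# Install a config file only when its content actually changed.
# NEW and DST are illustrative paths, not the ones minikube uses.
NEW=/tmp/docker.service.new
DST=/tmp/docker.service
rm -f "$DST"

printf '%s\n' '[Unit]' 'Description=demo' > "$NEW"

# diff exits non-zero when the files differ or DST is missing,
# which triggers the install branch -- the same shape as the log's
# `diff -u old new || { mv new old; systemctl daemon-reload ...; }`.
if ! diff -u "$DST" "$NEW" >/dev/null 2>&1; then
    mv "$NEW" "$DST"
    echo "installed"
else
    rm -f "$NEW"
    echo "unchanged"
fi
```

The same pattern makes the provisioning step idempotent: re-running it against an unchanged unit skips the service restart.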
	I0807 19:36:33.077395     956 client.go:171] duration metric: took 1m58.6061473s to LocalClient.Create
	I0807 19:36:33.077481     956 start.go:167] duration metric: took 1m58.6063388s to libmachine.API.Create "multinode-116700"
	I0807 19:36:33.077481     956 start.go:293] postStartSetup for "multinode-116700" (driver="hyperv")
	I0807 19:36:33.077481     956 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 19:36:33.089961     956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 19:36:33.089961     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:36:35.272948     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:36:35.272948     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:36:35.272948     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:36:37.872721     956 main.go:141] libmachine: [stdout =====>] : 172.28.224.86
	
	I0807 19:36:37.872721     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:36:37.872721     956 sshutil.go:53] new ssh client: &{IP:172.28.224.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700\id_rsa Username:docker}
	I0807 19:36:37.989240     956 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8991586s)
	I0807 19:36:38.002018     956 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 19:36:38.009263     956 command_runner.go:130] > NAME=Buildroot
	I0807 19:36:38.009378     956 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0807 19:36:38.009378     956 command_runner.go:130] > ID=buildroot
	I0807 19:36:38.009378     956 command_runner.go:130] > VERSION_ID=2023.02.9
	I0807 19:36:38.009378     956 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0807 19:36:38.009499     956 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 19:36:38.009499     956 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0807 19:36:38.010225     956 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0807 19:36:38.010939     956 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> 96602.pem in /etc/ssl/certs
	I0807 19:36:38.010939     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> /etc/ssl/certs/96602.pem
	I0807 19:36:38.024525     956 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 19:36:38.044496     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /etc/ssl/certs/96602.pem (1708 bytes)
	I0807 19:36:38.103620     956 start.go:296] duration metric: took 5.0260755s for postStartSetup
	I0807 19:36:38.106810     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:36:40.284742     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:36:40.284742     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:36:40.285643     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:36:42.919082     956 main.go:141] libmachine: [stdout =====>] : 172.28.224.86
	
	I0807 19:36:42.919645     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:36:42.919645     956 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\config.json ...
	I0807 19:36:42.923115     956 start.go:128] duration metric: took 2m8.4565151s to createHost
	I0807 19:36:42.923115     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:36:45.142158     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:36:45.143139     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:36:45.143293     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:36:47.720844     956 main.go:141] libmachine: [stdout =====>] : 172.28.224.86
	
	I0807 19:36:47.720844     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:36:47.727423     956 main.go:141] libmachine: Using SSH client type: native
	I0807 19:36:47.728013     956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.86 22 <nil> <nil>}
	I0807 19:36:47.728013     956 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 19:36:47.869497     956 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723059407.890445181
	
	I0807 19:36:47.869497     956 fix.go:216] guest clock: 1723059407.890445181
	I0807 19:36:47.869497     956 fix.go:229] Guest: 2024-08-07 19:36:47.890445181 +0000 UTC Remote: 2024-08-07 19:36:42.9231155 +0000 UTC m=+134.198496201 (delta=4.967329681s)
	I0807 19:36:47.870100     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:36:50.056114     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:36:50.056114     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:36:50.057064     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:36:52.665527     956 main.go:141] libmachine: [stdout =====>] : 172.28.224.86
	
	I0807 19:36:52.665527     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:36:52.671240     956 main.go:141] libmachine: Using SSH client type: native
	I0807 19:36:52.671830     956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.224.86 22 <nil> <nil>}
	I0807 19:36:52.671830     956 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1723059407
	I0807 19:36:52.811829     956 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Aug  7 19:36:47 UTC 2024
	
	I0807 19:36:52.811829     956 fix.go:236] clock set: Wed Aug  7 19:36:47 UTC 2024
	 (err=<nil>)
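Editor's note: the clock-sync sequence above reads the guest's epoch time over SSH (`date +%s.%N`, mangled in the log by a Go format-verb artifact), compares it to the host-side timestamp, and resets the guest with `sudo date -s @<epoch>` when they drift (here the delta was about 5s). A sketch of the drift check only, with both timestamps hardcoded for illustration:

```shell
#!/bin/sh
# Compare a "guest" epoch reading against a "host" reference and
# report drift. Both values are hardcoded here for illustration;
# the real code reads them from the VM and the local clock.
GUEST=1723059407   # guest clock: 2024-08-07 19:36:47 UTC (from the log)
HOST=1723059402    # host-side reference, ~5s behind

DELTA=$((GUEST - HOST))
echo "delta=${DELTA}s"

# minikube would then run `sudo date -s @<epoch>` on the guest
# (shown in the log); here we only decide whether a reset is needed.
if [ "$DELTA" -gt 2 ] || [ "$DELTA" -lt -2 ]; then
    echo "clock reset needed"
fi
```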
	I0807 19:36:52.811829     956 start.go:83] releasing machines lock for "multinode-116700", held for 2m18.3451035s
	I0807 19:36:52.811829     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:36:55.036173     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:36:55.036787     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:36:55.036787     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:36:57.621649     956 main.go:141] libmachine: [stdout =====>] : 172.28.224.86
	
	I0807 19:36:57.622000     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:36:57.625891     956 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0807 19:36:57.626032     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:36:57.636112     956 ssh_runner.go:195] Run: cat /version.json
	I0807 19:36:57.637129     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:37:00.028354     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:37:00.028354     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:37:00.028354     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:37:00.029288     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:37:00.029288     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:37:00.029288     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:37:02.808844     956 main.go:141] libmachine: [stdout =====>] : 172.28.224.86
	
	I0807 19:37:02.809867     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:37:02.810480     956 sshutil.go:53] new ssh client: &{IP:172.28.224.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700\id_rsa Username:docker}
	I0807 19:37:02.828997     956 main.go:141] libmachine: [stdout =====>] : 172.28.224.86
	
	I0807 19:37:02.828997     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:37:02.828997     956 sshutil.go:53] new ssh client: &{IP:172.28.224.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700\id_rsa Username:docker}
	I0807 19:37:02.903035     956 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0807 19:37:02.904095     956 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.2780111s)
	W0807 19:37:02.904146     956 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0807 19:37:02.921276     956 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0807 19:37:02.921276     956 ssh_runner.go:235] Completed: cat /version.json: (5.2850968s)
	I0807 19:37:02.933634     956 ssh_runner.go:195] Run: systemctl --version
	I0807 19:37:02.942134     956 command_runner.go:130] > systemd 252 (252)
	I0807 19:37:02.942134     956 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0807 19:37:02.958591     956 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0807 19:37:02.966887     956 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0807 19:37:02.967391     956 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 19:37:02.978769     956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 19:37:03.007484     956 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0807 19:37:03.008019     956 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
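Editor's note: the `find` step above disables competing CNI configs (anything matching `*bridge*` or `*podman*` that is not already disabled) by renaming them with a `.mk_disabled` suffix rather than deleting them. A sketch of the same rename against a scratch directory:

```shell
#!/bin/sh
# Disable bridge/podman CNI configs by renaming, mirroring the log's
# find -exec step. /tmp/cni-demo stands in for /etc/cni/net.d.
D=/tmp/cni-demo
mkdir -p "$D"
touch "$D/87-podman-bridge.conflist" "$D/10-kindnet.conflist"

find "$D" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls "$D"
```

Renaming instead of removing keeps the original configs recoverable if the bridge CNI needs to be re-enabled later.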
	I0807 19:37:03.008019     956 start.go:495] detecting cgroup driver to use...
	I0807 19:37:03.008330     956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0807 19:37:03.031467     956 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0807 19:37:03.031467     956 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0807 19:37:03.046956     956 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0807 19:37:03.060195     956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0807 19:37:03.094371     956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0807 19:37:03.114663     956 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0807 19:37:03.125710     956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0807 19:37:03.157753     956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 19:37:03.187409     956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0807 19:37:03.218029     956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 19:37:03.249518     956 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 19:37:03.285932     956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0807 19:37:03.317604     956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0807 19:37:03.347331     956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0807 19:37:03.380430     956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 19:37:03.397475     956 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0807 19:37:03.407463     956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 19:37:03.437765     956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:37:03.632443     956 ssh_runner.go:195] Run: sudo systemctl restart containerd
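Editor's note: the run of `sed` edits above rewrites containerd's `config.toml` in place, pinning the sandbox image, forcing `SystemdCgroup = false` for the cgroupfs driver, and migrating `io.containerd.runc.v1` / `io.containerd.runtime.v1.linux` to `io.containerd.runc.v2`, before reloading systemd and restarting containerd. A sketch of one such edit against a scratch file (GNU sed assumed, as on the Buildroot guest):

```shell
#!/bin/sh
# Flip SystemdCgroup in a scratch config.toml, preserving indentation,
# exactly as one of the log's sed invocations does over SSH.
CFG=/tmp/config.toml
printf '    SystemdCgroup = true\n' > "$CFG"

sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
cat "$CFG"
```

The `( *)` capture keeps the line's original leading spaces, which matters because TOML tables in containerd's config are conventionally indented.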
	I0807 19:37:03.664553     956 start.go:495] detecting cgroup driver to use...
	I0807 19:37:03.678028     956 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0807 19:37:03.700429     956 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0807 19:37:03.700429     956 command_runner.go:130] > [Unit]
	I0807 19:37:03.700429     956 command_runner.go:130] > Description=Docker Application Container Engine
	I0807 19:37:03.700429     956 command_runner.go:130] > Documentation=https://docs.docker.com
	I0807 19:37:03.700548     956 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0807 19:37:03.700548     956 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0807 19:37:03.700548     956 command_runner.go:130] > StartLimitBurst=3
	I0807 19:37:03.700548     956 command_runner.go:130] > StartLimitIntervalSec=60
	I0807 19:37:03.700548     956 command_runner.go:130] > [Service]
	I0807 19:37:03.700548     956 command_runner.go:130] > Type=notify
	I0807 19:37:03.700548     956 command_runner.go:130] > Restart=on-failure
	I0807 19:37:03.700548     956 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0807 19:37:03.700548     956 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0807 19:37:03.700548     956 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0807 19:37:03.700548     956 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0807 19:37:03.700548     956 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0807 19:37:03.700548     956 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0807 19:37:03.700548     956 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0807 19:37:03.700548     956 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0807 19:37:03.700548     956 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0807 19:37:03.700548     956 command_runner.go:130] > ExecStart=
	I0807 19:37:03.700548     956 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0807 19:37:03.700548     956 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0807 19:37:03.700548     956 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0807 19:37:03.700548     956 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0807 19:37:03.700548     956 command_runner.go:130] > LimitNOFILE=infinity
	I0807 19:37:03.700548     956 command_runner.go:130] > LimitNPROC=infinity
	I0807 19:37:03.700548     956 command_runner.go:130] > LimitCORE=infinity
	I0807 19:37:03.700548     956 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0807 19:37:03.700548     956 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0807 19:37:03.700548     956 command_runner.go:130] > TasksMax=infinity
	I0807 19:37:03.700548     956 command_runner.go:130] > TimeoutStartSec=0
	I0807 19:37:03.700548     956 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0807 19:37:03.700548     956 command_runner.go:130] > Delegate=yes
	I0807 19:37:03.700548     956 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0807 19:37:03.700548     956 command_runner.go:130] > KillMode=process
	I0807 19:37:03.700548     956 command_runner.go:130] > [Install]
	I0807 19:37:03.700548     956 command_runner.go:130] > WantedBy=multi-user.target
	I0807 19:37:03.712894     956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 19:37:03.749954     956 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 19:37:03.789840     956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 19:37:03.823114     956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 19:37:03.856289     956 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0807 19:37:03.915448     956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 19:37:03.938644     956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 19:37:03.972011     956 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
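Editor's note: having stopped containerd and crio, the log rewrites `/etc/crictl.yaml` so that `crictl` talks to the cri-dockerd socket instead of containerd's. A sketch of that config write, using `/tmp` to stay unprivileged:

```shell
#!/bin/sh
# Point crictl at the cri-dockerd socket, mirroring the log's
# mkdir-then-tee step. CONF is an illustrative path; the real
# file is /etc/crictl.yaml.
CONF=/tmp/crictl.yaml
mkdir -p "$(dirname "$CONF")"
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' > "$CONF"
cat "$CONF"
```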
	I0807 19:37:03.986323     956 ssh_runner.go:195] Run: which cri-dockerd
	I0807 19:37:03.991410     956 command_runner.go:130] > /usr/bin/cri-dockerd
	I0807 19:37:04.003937     956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0807 19:37:04.022016     956 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0807 19:37:04.067584     956 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0807 19:37:04.263595     956 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0807 19:37:04.453227     956 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0807 19:37:04.453489     956 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0807 19:37:04.501021     956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:37:04.713755     956 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 19:37:07.282718     956 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5678844s)
	I0807 19:37:07.296162     956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0807 19:37:07.332208     956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 19:37:07.368046     956 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0807 19:37:07.578114     956 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0807 19:37:07.783515     956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:37:07.984151     956 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0807 19:37:08.023022     956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 19:37:08.056485     956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:37:08.257040     956 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0807 19:37:08.379967     956 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0807 19:37:08.393134     956 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0807 19:37:08.405748     956 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0807 19:37:08.405748     956 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0807 19:37:08.405748     956 command_runner.go:130] > Device: 0,22	Inode: 886         Links: 1
	I0807 19:37:08.405748     956 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0807 19:37:08.405748     956 command_runner.go:130] > Access: 2024-08-07 19:37:08.299085244 +0000
	I0807 19:37:08.405848     956 command_runner.go:130] > Modify: 2024-08-07 19:37:08.299085244 +0000
	I0807 19:37:08.405848     956 command_runner.go:130] > Change: 2024-08-07 19:37:08.303085256 +0000
	I0807 19:37:08.405848     956 command_runner.go:130] >  Birth: -
	I0807 19:37:08.405848     956 start.go:563] Will wait 60s for crictl version
	I0807 19:37:08.418362     956 ssh_runner.go:195] Run: which crictl
	I0807 19:37:08.423687     956 command_runner.go:130] > /usr/bin/crictl
	I0807 19:37:08.435710     956 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 19:37:08.488453     956 command_runner.go:130] > Version:  0.1.0
	I0807 19:37:08.488453     956 command_runner.go:130] > RuntimeName:  docker
	I0807 19:37:08.488453     956 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0807 19:37:08.488453     956 command_runner.go:130] > RuntimeApiVersion:  v1
	I0807 19:37:08.491599     956 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0807 19:37:08.502042     956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 19:37:08.537579     956 command_runner.go:130] > 27.1.1
	I0807 19:37:08.547280     956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 19:37:08.577077     956 command_runner.go:130] > 27.1.1
	I0807 19:37:08.582358     956 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0807 19:37:08.582358     956 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0807 19:37:08.587410     956 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0807 19:37:08.587410     956 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0807 19:37:08.587410     956 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0807 19:37:08.587585     956 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:f6:3a:6a Flags:up|broadcast|multicast|running}
	I0807 19:37:08.589610     956 ip.go:210] interface addr: fe80::e7eb:b592:d388:ff99/64
	I0807 19:37:08.589610     956 ip.go:210] interface addr: 172.28.224.1/20
	I0807 19:37:08.601110     956 ssh_runner.go:195] Run: grep 172.28.224.1	host.minikube.internal$ /etc/hosts
	I0807 19:37:08.606958     956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
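Editor's note: the `/etc/hosts` update above is another idempotent rewrite. It strips any existing tab-separated `host.minikube.internal` entry, appends the current gateway IP (the `vEthernet (Default Switch)` address found just before), and swaps the result in via a temp file. The same pattern against a throwaway file:

```shell
#!/bin/sh
# Rewrite a hosts-style file so host.minikube.internal maps to the
# gateway IP, using the log's grep -v / append / copy pattern.
HOSTS=/tmp/hosts.demo
TAB=$(printf '\t')
printf '127.0.0.1\tlocalhost\n1.2.3.4\thost.minikube.internal\n' > "$HOSTS"

# Drop any stale mapping, then append the current gateway IP
# (172.28.224.1, taken from the log).
{ grep -v "${TAB}host.minikube.internal\$" "$HOSTS"
  printf '172.28.224.1\thost.minikube.internal\n'; } > /tmp/h.$$
cp /tmp/h.$$ "$HOSTS" && rm -f /tmp/h.$$
cat "$HOSTS"
```

Writing to `/tmp/h.$$` first and then copying keeps the hosts file from ever being observed half-written.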
	I0807 19:37:08.628157     956 kubeadm.go:883] updating cluster {Name:multinode-116700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-116700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.224.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0807 19:37:08.628248     956 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 19:37:08.636558     956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0807 19:37:08.657424     956 docker.go:685] Got preloaded images: 
	I0807 19:37:08.657424     956 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0807 19:37:08.670758     956 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0807 19:37:08.688907     956 command_runner.go:139] > {"Repositories":{}}
	I0807 19:37:08.703678     956 ssh_runner.go:195] Run: which lz4
	I0807 19:37:08.708703     956 command_runner.go:130] > /usr/bin/lz4
	I0807 19:37:08.709306     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0807 19:37:08.721901     956 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0807 19:37:08.727924     956 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0807 19:37:08.727924     956 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0807 19:37:08.727924     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
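	The existence check above and the copy it triggers follow a simple protocol: `stat` the remote path, and only when it exits nonzero transfer the cached tarball. A sketch under assumed scratch paths (a local `cp` stands in for the scp the log performs):

```shell
# Copy-if-missing, mirroring the stat existence check from the log.
SRC=$(mktemp)          # stands in for the cached preload tarball on the host
TARGET=$(mktemp -u)    # stands in for /preloaded.tar.lz4 on the VM (absent)
if stat -c "%s %y" "$TARGET" >/dev/null 2>&1; then
  echo "already present, skipping copy"
else
  cp "$SRC" "$TARGET"  # the log does this over scp instead
  echo "copied to $TARGET"
fi
```

	The `%s %y` format (size and mtime) doubles as a cheap fingerprint, so an existing tarball with matching metadata is not re-transferred.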
	I0807 19:37:10.734709     956 docker.go:649] duration metric: took 2.0251883s to copy over tarball
	I0807 19:37:10.747027     956 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0807 19:37:19.487306     956 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.7401679s)
	I0807 19:37:19.487481     956 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0807 19:37:19.554453     956 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0807 19:37:19.574065     956 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.3":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.3":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.3":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.3":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0807 19:37:19.574065     956 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0807 19:37:19.620469     956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:37:19.828811     956 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 19:37:23.199228     956 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3703747s)
	I0807 19:37:23.208794     956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0807 19:37:23.234201     956 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0807 19:37:23.234201     956 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0807 19:37:23.234201     956 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0807 19:37:23.234201     956 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0807 19:37:23.234201     956 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0807 19:37:23.234201     956 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0807 19:37:23.234201     956 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0807 19:37:23.234201     956 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 19:37:23.234201     956 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0807 19:37:23.234201     956 cache_images.go:84] Images are preloaded, skipping loading
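	The "Images are preloaded, skipping loading" decision above amounts to a set-containment check: every image the cluster needs must appear in the runtime's `docker images` output. A sketch of that check using lists copied from the log (docker itself is not invoked here):

```shell
# Verify a required image list against the runtime's reported images.
got='registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/pause:3.9'
missing=0
for img in registry.k8s.io/kube-apiserver:v1.30.3 registry.k8s.io/pause:3.9; do
  # -x matches whole lines, -F disables regex metacharacters in image names.
  printf '%s\n' "$got" | grep -qxF "$img" || { echo "$img wasn't preloaded"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "Images are preloaded, skipping loading"
```

	This is the same check that failed at 19:37:08 above (empty image list, so `kube-apiserver ... wasn't preloaded`) and passes here after the tarball was extracted and docker restarted.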
	I0807 19:37:23.234201     956 kubeadm.go:934] updating node { 172.28.224.86 8443 v1.30.3 docker true true} ...
	I0807 19:37:23.234201     956 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-116700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.224.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-116700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 19:37:23.243019     956 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0807 19:37:23.314027     956 command_runner.go:130] > cgroupfs
	I0807 19:37:23.315087     956 cni.go:84] Creating CNI manager for ""
	I0807 19:37:23.315087     956 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0807 19:37:23.315087     956 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 19:37:23.315087     956 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.224.86 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-116700 NodeName:multinode-116700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.224.86"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.224.86 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 19:37:23.315087     956 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.224.86
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-116700"
	  kubeletExtraArgs:
	    node-ip: 172.28.224.86
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.224.86"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
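	The generated kubeadm config above is one file carrying four `---`-separated YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick structural sanity check over an abridged copy of that file:

```shell
# Count the documents in a kubeadm config like the one the log generates
# (content abridged to the apiVersion/kind headers of each document).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
kinds=$(grep -c '^kind:' "$cfg")
echo "documents: $kinds"
```

	kubeadm reads all four from the single `/var/tmp/minikube/kubeadm.yaml` that is scp'd over a few lines later.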
	I0807 19:37:23.326021     956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 19:37:23.345688     956 command_runner.go:130] > kubeadm
	I0807 19:37:23.345688     956 command_runner.go:130] > kubectl
	I0807 19:37:23.345688     956 command_runner.go:130] > kubelet
	I0807 19:37:23.345792     956 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 19:37:23.358629     956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0807 19:37:23.378320     956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0807 19:37:23.408001     956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 19:37:23.437994     956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0807 19:37:23.481982     956 ssh_runner.go:195] Run: grep 172.28.224.86	control-plane.minikube.internal$ /etc/hosts
	I0807 19:37:23.487986     956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.224.86	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 19:37:23.519877     956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:37:23.709953     956 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 19:37:23.738501     956 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700 for IP: 172.28.224.86
	I0807 19:37:23.738595     956 certs.go:194] generating shared ca certs ...
	I0807 19:37:23.738595     956 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:37:23.739634     956 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0807 19:37:23.740261     956 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0807 19:37:23.740515     956 certs.go:256] generating profile certs ...
	I0807 19:37:23.741296     956 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\client.key
	I0807 19:37:23.741460     956 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\client.crt with IP's: []
	I0807 19:37:23.902619     956 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\client.crt ...
	I0807 19:37:23.902619     956 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\client.crt: {Name:mk5703e31a2d7f1f4e74b0ed007ffb8442eede73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:37:23.903614     956 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\client.key ...
	I0807 19:37:23.903614     956 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\client.key: {Name:mk184a73ce3d594930ffd56b3d4006ca0e2c4fc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:37:23.905617     956 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.key.2b831a87
	I0807 19:37:23.905617     956 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.crt.2b831a87 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.224.86]
	I0807 19:37:24.405750     956 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.crt.2b831a87 ...
	I0807 19:37:24.405750     956 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.crt.2b831a87: {Name:mke948b39090e8466a0e39d41263aa6457c14ab5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:37:24.407908     956 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.key.2b831a87 ...
	I0807 19:37:24.407908     956 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.key.2b831a87: {Name:mke12f7d5218f89fc5b91cb64e8d673458900c97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:37:24.408291     956 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.crt.2b831a87 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.crt
	I0807 19:37:24.422569     956 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.key.2b831a87 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.key
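	The apiserver cert generated above carries the SAN IP list `[10.96.0.1 127.0.0.1 10.0.0.1 172.28.224.86]` (service VIP, loopback, and node IP). A sketch of producing an equivalent certificate with the openssl CLI rather than minikube's Go crypto helpers; the CA here is a throwaway self-signed stand-in and all paths are scratch files:

```shell
# Issue an apiserver-style cert with the SAN IPs from the log.
dir=$(mktemp -d)
# Throwaway CA (stands in for minikubeCA).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" -days 1 2>/dev/null
# CSR for the apiserver leaf cert.
openssl req -newkey rsa:2048 -nodes -subj "/CN=minikube" \
  -keyout "$dir/apiserver.key" -out "$dir/apiserver.csr" 2>/dev/null
# SANs matching the IP list in the log.
printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:172.28.224.86\n' > "$dir/san.ext"
openssl x509 -req -in "$dir/apiserver.csr" -CA "$dir/ca.crt" -CAkey "$dir/ca.key" \
  -CAcreateserial -days 1 -extfile "$dir/san.ext" -out "$dir/apiserver.crt" 2>/dev/null
openssl x509 -noout -text -in "$dir/apiserver.crt" | grep 'IP Address'
```

	Without the node IP in the SAN list, kubectl connections to `https://172.28.224.86:8443` would fail TLS verification, which is why the cert is keyed by a hash of that IP set (`apiserver.crt.2b831a87`) and regenerated when the node IP changes.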
	I0807 19:37:24.424466     956 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\proxy-client.key
	I0807 19:37:24.424466     956 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\proxy-client.crt with IP's: []
	I0807 19:37:24.586453     956 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\proxy-client.crt ...
	I0807 19:37:24.586453     956 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\proxy-client.crt: {Name:mk3ef97bc66ca15b19a9b199e0f3c0ab7fb779a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:37:24.588759     956 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\proxy-client.key ...
	I0807 19:37:24.588759     956 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\proxy-client.key: {Name:mk0b92b27f5fdef906eafec79479c9d1f440fa94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:37:24.590348     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0807 19:37:24.590553     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0807 19:37:24.590808     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0807 19:37:24.591143     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0807 19:37:24.591327     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0807 19:37:24.591553     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0807 19:37:24.591553     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0807 19:37:24.601315     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0807 19:37:24.602039     956 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem (1338 bytes)
	W0807 19:37:24.602785     956 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660_empty.pem, impossibly tiny 0 bytes
	I0807 19:37:24.602785     956 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0807 19:37:24.603100     956 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0807 19:37:24.603419     956 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0807 19:37:24.603666     956 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0807 19:37:24.603969     956 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem (1708 bytes)
	I0807 19:37:24.603969     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem -> /usr/share/ca-certificates/9660.pem
	I0807 19:37:24.604691     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> /usr/share/ca-certificates/96602.pem
	I0807 19:37:24.604896     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:37:24.605219     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 19:37:24.655005     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 19:37:24.709217     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 19:37:24.754181     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0807 19:37:24.793875     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0807 19:37:24.841894     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0807 19:37:24.887468     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 19:37:24.936471     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 19:37:24.985167     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem --> /usr/share/ca-certificates/9660.pem (1338 bytes)
	I0807 19:37:25.030307     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /usr/share/ca-certificates/96602.pem (1708 bytes)
	I0807 19:37:25.077076     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 19:37:25.121292     956 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 19:37:25.165407     956 ssh_runner.go:195] Run: openssl version
	I0807 19:37:25.173702     956 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0807 19:37:25.185690     956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9660.pem && ln -fs /usr/share/ca-certificates/9660.pem /etc/ssl/certs/9660.pem"
	I0807 19:37:25.215266     956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9660.pem
	I0807 19:37:25.221254     956 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  7 17:50 /usr/share/ca-certificates/9660.pem
	I0807 19:37:25.222250     956 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 17:50 /usr/share/ca-certificates/9660.pem
	I0807 19:37:25.233924     956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9660.pem
	I0807 19:37:25.241919     956 command_runner.go:130] > 51391683
	I0807 19:37:25.254408     956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9660.pem /etc/ssl/certs/51391683.0"
	I0807 19:37:25.283375     956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96602.pem && ln -fs /usr/share/ca-certificates/96602.pem /etc/ssl/certs/96602.pem"
	I0807 19:37:25.312380     956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96602.pem
	I0807 19:37:25.319376     956 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  7 17:50 /usr/share/ca-certificates/96602.pem
	I0807 19:37:25.319376     956 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 17:50 /usr/share/ca-certificates/96602.pem
	I0807 19:37:25.330422     956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96602.pem
	I0807 19:37:25.339152     956 command_runner.go:130] > 3ec20f2e
	I0807 19:37:25.351264     956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96602.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 19:37:25.380203     956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 19:37:25.409147     956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:37:25.417117     956 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  7 17:33 /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:37:25.417166     956 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:33 /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:37:25.428380     956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:37:25.436066     956 command_runner.go:130] > b5213941
	I0807 19:37:25.447633     956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
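	The three `ln -fs ... /etc/ssl/certs/<hash>.0` commands above follow OpenSSL's subject-hash lookup convention: a CA is found at `<subject-hash>.0` in the certs directory. A sketch of the same install step with a throwaway self-signed cert and a scratch directory standing in for `/etc/ssl/certs`:

```shell
# Install a CA cert under its OpenSSL subject hash, as the log does.
certdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example-ca" \
  -keyout "$certdir/ca.key" -out "$certdir/minikubeCA.pem" -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$certdir/minikubeCA.pem")
ln -fs "$certdir/minikubeCA.pem" "$certdir/$hash.0"
ls "$certdir/$hash.0"
```

	The `test -L || ln -fs` guard in the logged commands makes the step idempotent: the symlink is only (re)created when it is not already in place.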
	I0807 19:37:25.479079     956 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 19:37:25.486466     956 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0807 19:37:25.486584     956 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0807 19:37:25.486845     956 kubeadm.go:392] StartCluster: {Name:multinode-116700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-116700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.224.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:37:25.496098     956 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0807 19:37:25.532238     956 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0807 19:37:25.550230     956 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0807 19:37:25.550230     956 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0807 19:37:25.550370     956 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0807 19:37:25.563654     956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0807 19:37:25.593260     956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0807 19:37:25.608593     956 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0807 19:37:25.608593     956 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0807 19:37:25.608593     956 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0807 19:37:25.609153     956 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0807 19:37:25.609364     956 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0807 19:37:25.609364     956 kubeadm.go:157] found existing configuration files:
	
	I0807 19:37:25.620337     956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0807 19:37:25.635889     956 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0807 19:37:25.636288     956 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0807 19:37:25.648908     956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0807 19:37:25.679897     956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0807 19:37:25.696874     956 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0807 19:37:25.696874     956 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0807 19:37:25.708186     956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0807 19:37:25.739812     956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0807 19:37:25.755806     956 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0807 19:37:25.756428     956 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0807 19:37:25.768004     956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0807 19:37:25.797426     956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0807 19:37:25.815063     956 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0807 19:37:25.815063     956 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0807 19:37:25.826015     956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0807 19:37:25.845054     956 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0807 19:37:26.274307     956 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0807 19:37:26.274307     956 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0807 19:37:40.123904     956 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0807 19:37:40.123904     956 command_runner.go:130] > [init] Using Kubernetes version: v1.30.3
	I0807 19:37:40.124121     956 command_runner.go:130] > [preflight] Running pre-flight checks
	I0807 19:37:40.124121     956 kubeadm.go:310] [preflight] Running pre-flight checks
	I0807 19:37:40.124191     956 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0807 19:37:40.124191     956 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0807 19:37:40.124191     956 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0807 19:37:40.124191     956 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0807 19:37:40.124992     956 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0807 19:37:40.125061     956 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0807 19:37:40.125270     956 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0807 19:37:40.125328     956 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0807 19:37:40.130199     956 out.go:204]   - Generating certificates and keys ...
	I0807 19:37:40.130199     956 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0807 19:37:40.130199     956 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0807 19:37:40.130199     956 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0807 19:37:40.130199     956 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0807 19:37:40.130199     956 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0807 19:37:40.130199     956 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0807 19:37:40.130199     956 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0807 19:37:40.130199     956 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0807 19:37:40.130199     956 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0807 19:37:40.130199     956 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0807 19:37:40.131188     956 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0807 19:37:40.131188     956 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0807 19:37:40.131188     956 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0807 19:37:40.131188     956 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0807 19:37:40.131188     956 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-116700] and IPs [172.28.224.86 127.0.0.1 ::1]
	I0807 19:37:40.131188     956 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-116700] and IPs [172.28.224.86 127.0.0.1 ::1]
	I0807 19:37:40.131188     956 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0807 19:37:40.131188     956 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0807 19:37:40.131188     956 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-116700] and IPs [172.28.224.86 127.0.0.1 ::1]
	I0807 19:37:40.131188     956 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-116700] and IPs [172.28.224.86 127.0.0.1 ::1]
	I0807 19:37:40.132209     956 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0807 19:37:40.132209     956 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0807 19:37:40.132209     956 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0807 19:37:40.132209     956 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0807 19:37:40.132209     956 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0807 19:37:40.132209     956 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0807 19:37:40.132209     956 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0807 19:37:40.132209     956 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0807 19:37:40.132209     956 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0807 19:37:40.132209     956 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0807 19:37:40.132209     956 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0807 19:37:40.132209     956 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0807 19:37:40.132209     956 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0807 19:37:40.132209     956 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0807 19:37:40.133297     956 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0807 19:37:40.133297     956 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0807 19:37:40.133297     956 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0807 19:37:40.133297     956 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0807 19:37:40.133297     956 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0807 19:37:40.133297     956 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0807 19:37:40.133297     956 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0807 19:37:40.133297     956 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0807 19:37:40.137202     956 out.go:204]   - Booting up control plane ...
	I0807 19:37:40.137202     956 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0807 19:37:40.137202     956 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0807 19:37:40.137202     956 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0807 19:37:40.137202     956 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0807 19:37:40.137202     956 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0807 19:37:40.138292     956 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0807 19:37:40.138292     956 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0807 19:37:40.138292     956 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0807 19:37:40.138292     956 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0807 19:37:40.138292     956 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0807 19:37:40.138292     956 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0807 19:37:40.138292     956 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0807 19:37:40.139191     956 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0807 19:37:40.139191     956 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0807 19:37:40.139191     956 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0807 19:37:40.139191     956 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0807 19:37:40.139191     956 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002057244s
	I0807 19:37:40.139191     956 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002057244s
	I0807 19:37:40.139191     956 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0807 19:37:40.139191     956 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0807 19:37:40.139191     956 command_runner.go:130] > [api-check] The API server is healthy after 7.502597138s
	I0807 19:37:40.139191     956 kubeadm.go:310] [api-check] The API server is healthy after 7.502597138s
	I0807 19:37:40.140192     956 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0807 19:37:40.140192     956 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0807 19:37:40.140192     956 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0807 19:37:40.140192     956 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0807 19:37:40.140192     956 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0807 19:37:40.140192     956 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0807 19:37:40.140192     956 kubeadm.go:310] [mark-control-plane] Marking the node multinode-116700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0807 19:37:40.140192     956 command_runner.go:130] > [mark-control-plane] Marking the node multinode-116700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0807 19:37:40.141202     956 kubeadm.go:310] [bootstrap-token] Using token: rfjytv.cbwlczjh6v2t5xt1
	I0807 19:37:40.141202     956 command_runner.go:130] > [bootstrap-token] Using token: rfjytv.cbwlczjh6v2t5xt1
	I0807 19:37:40.144193     956 out.go:204]   - Configuring RBAC rules ...
	I0807 19:37:40.144193     956 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0807 19:37:40.144193     956 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0807 19:37:40.144193     956 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0807 19:37:40.145215     956 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0807 19:37:40.145215     956 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0807 19:37:40.145215     956 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0807 19:37:40.145215     956 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0807 19:37:40.145215     956 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0807 19:37:40.145215     956 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0807 19:37:40.145215     956 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0807 19:37:40.146187     956 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0807 19:37:40.146187     956 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0807 19:37:40.146187     956 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0807 19:37:40.146187     956 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0807 19:37:40.146187     956 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0807 19:37:40.146187     956 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0807 19:37:40.146187     956 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0807 19:37:40.146187     956 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0807 19:37:40.146187     956 kubeadm.go:310] 
	I0807 19:37:40.146187     956 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0807 19:37:40.146187     956 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0807 19:37:40.146187     956 kubeadm.go:310] 
	I0807 19:37:40.146187     956 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0807 19:37:40.146187     956 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0807 19:37:40.146187     956 kubeadm.go:310] 
	I0807 19:37:40.146187     956 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0807 19:37:40.146187     956 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0807 19:37:40.146187     956 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0807 19:37:40.146187     956 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0807 19:37:40.146187     956 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0807 19:37:40.147187     956 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0807 19:37:40.147187     956 kubeadm.go:310] 
	I0807 19:37:40.147187     956 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0807 19:37:40.147187     956 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0807 19:37:40.147187     956 kubeadm.go:310] 
	I0807 19:37:40.147187     956 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0807 19:37:40.147187     956 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0807 19:37:40.147187     956 kubeadm.go:310] 
	I0807 19:37:40.147187     956 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0807 19:37:40.147187     956 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0807 19:37:40.147187     956 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0807 19:37:40.147187     956 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0807 19:37:40.147187     956 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0807 19:37:40.147187     956 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0807 19:37:40.147187     956 kubeadm.go:310] 
	I0807 19:37:40.147187     956 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0807 19:37:40.147187     956 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0807 19:37:40.147187     956 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0807 19:37:40.147187     956 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0807 19:37:40.147187     956 kubeadm.go:310] 
	I0807 19:37:40.147187     956 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rfjytv.cbwlczjh6v2t5xt1 \
	I0807 19:37:40.147187     956 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token rfjytv.cbwlczjh6v2t5xt1 \
	I0807 19:37:40.147187     956 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:54111b4769238fc338d0511ba57c177f732a6d68c0b9cd0aa8cacbbf3c79643b \
	I0807 19:37:40.147187     956 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:54111b4769238fc338d0511ba57c177f732a6d68c0b9cd0aa8cacbbf3c79643b \
	I0807 19:37:40.147187     956 kubeadm.go:310] 	--control-plane 
	I0807 19:37:40.147187     956 command_runner.go:130] > 	--control-plane 
	I0807 19:37:40.147187     956 kubeadm.go:310] 
	I0807 19:37:40.148187     956 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0807 19:37:40.148187     956 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0807 19:37:40.148187     956 kubeadm.go:310] 
	I0807 19:37:40.148187     956 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rfjytv.cbwlczjh6v2t5xt1 \
	I0807 19:37:40.148187     956 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token rfjytv.cbwlczjh6v2t5xt1 \
	I0807 19:37:40.148187     956 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:54111b4769238fc338d0511ba57c177f732a6d68c0b9cd0aa8cacbbf3c79643b 
	I0807 19:37:40.148187     956 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:54111b4769238fc338d0511ba57c177f732a6d68c0b9cd0aa8cacbbf3c79643b 
	I0807 19:37:40.148187     956 cni.go:84] Creating CNI manager for ""
	I0807 19:37:40.148187     956 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0807 19:37:40.151197     956 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0807 19:37:40.166760     956 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0807 19:37:40.175196     956 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0807 19:37:40.175196     956 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0807 19:37:40.175196     956 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0807 19:37:40.175625     956 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0807 19:37:40.175625     956 command_runner.go:130] > Access: 2024-08-07 19:35:40.732658500 +0000
	I0807 19:37:40.175625     956 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0807 19:37:40.175625     956 command_runner.go:130] > Change: 2024-08-07 19:35:32.102000000 +0000
	I0807 19:37:40.175625     956 command_runner.go:130] >  Birth: -
	I0807 19:37:40.175625     956 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0807 19:37:40.175790     956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0807 19:37:40.219776     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0807 19:37:40.873452     956 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0807 19:37:40.873452     956 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0807 19:37:40.873650     956 command_runner.go:130] > serviceaccount/kindnet created
	I0807 19:37:40.873650     956 command_runner.go:130] > daemonset.apps/kindnet created
	I0807 19:37:40.873650     956 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0807 19:37:40.887797     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-116700 minikube.k8s.io/updated_at=2024_08_07T19_37_40_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e minikube.k8s.io/name=multinode-116700 minikube.k8s.io/primary=true
	I0807 19:37:40.887797     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:40.900754     956 command_runner.go:130] > -16
	I0807 19:37:40.900796     956 ops.go:34] apiserver oom_adj: -16
	I0807 19:37:41.030807     956 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0807 19:37:41.047804     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:41.072803     956 command_runner.go:130] > node/multinode-116700 labeled
	I0807 19:37:41.169312     956 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0807 19:37:41.552398     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:41.663979     956 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0807 19:37:42.052754     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:42.155775     956 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0807 19:37:42.552063     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:42.654334     956 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0807 19:37:43.054980     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:43.160838     956 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0807 19:37:43.559798     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:43.678510     956 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0807 19:37:44.057650     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:44.173346     956 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0807 19:37:44.562029     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:44.664954     956 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0807 19:37:45.063011     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:45.174430     956 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0807 19:37:45.550549     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:45.662541     956 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0807 19:37:46.053564     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:46.161136     956 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0807 19:37:46.552236     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:46.653523     956 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0807 19:37:47.064703     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:47.177488     956 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0807 19:37:47.567762     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:47.682830     956 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0807 19:37:48.060750     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:48.172064     956 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0807 19:37:48.561729     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:48.668675     956 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0807 19:37:49.060737     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:49.171242     956 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0807 19:37:49.561621     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:49.682438     956 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0807 19:37:50.063559     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:50.174887     956 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0807 19:37:50.557586     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:50.663640     956 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0807 19:37:51.059319     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:51.183247     956 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0807 19:37:51.559966     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:51.674368     956 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0807 19:37:52.060824     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:52.209326     956 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0807 19:37:52.557359     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:37:52.663732     956 command_runner.go:130] > NAME      SECRETS   AGE
	I0807 19:37:52.664683     956 command_runner.go:130] > default   0         0s
	I0807 19:37:52.664683     956 kubeadm.go:1113] duration metric: took 11.7908825s to wait for elevateKubeSystemPrivileges
	I0807 19:37:52.664683     956 kubeadm.go:394] duration metric: took 27.1774923s to StartCluster
	I0807 19:37:52.664683     956 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:37:52.664683     956 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 19:37:52.667049     956 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:37:52.668987     956 start.go:235] Will wait 6m0s for node &{Name: IP:172.28.224.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 19:37:52.668987     956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0807 19:37:52.668987     956 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0807 19:37:52.669344     956 addons.go:69] Setting storage-provisioner=true in profile "multinode-116700"
	I0807 19:37:52.669454     956 addons.go:234] Setting addon storage-provisioner=true in "multinode-116700"
	I0807 19:37:52.669454     956 host.go:66] Checking if "multinode-116700" exists ...
	I0807 19:37:52.669454     956 config.go:182] Loaded profile config "multinode-116700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 19:37:52.669454     956 addons.go:69] Setting default-storageclass=true in profile "multinode-116700"
	I0807 19:37:52.669454     956 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-116700"
	I0807 19:37:52.671521     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:37:52.671707     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:37:52.674438     956 out.go:177] * Verifying Kubernetes components...
	I0807 19:37:52.694745     956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:37:52.896882     956 command_runner.go:130] > apiVersion: v1
	I0807 19:37:52.896882     956 command_runner.go:130] > data:
	I0807 19:37:52.896882     956 command_runner.go:130] >   Corefile: |
	I0807 19:37:52.896988     956 command_runner.go:130] >     .:53 {
	I0807 19:37:52.896988     956 command_runner.go:130] >         errors
	I0807 19:37:52.896988     956 command_runner.go:130] >         health {
	I0807 19:37:52.896988     956 command_runner.go:130] >            lameduck 5s
	I0807 19:37:52.896988     956 command_runner.go:130] >         }
	I0807 19:37:52.896988     956 command_runner.go:130] >         ready
	I0807 19:37:52.896988     956 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0807 19:37:52.897043     956 command_runner.go:130] >            pods insecure
	I0807 19:37:52.897043     956 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0807 19:37:52.897043     956 command_runner.go:130] >            ttl 30
	I0807 19:37:52.897043     956 command_runner.go:130] >         }
	I0807 19:37:52.897043     956 command_runner.go:130] >         prometheus :9153
	I0807 19:37:52.897043     956 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0807 19:37:52.897043     956 command_runner.go:130] >            max_concurrent 1000
	I0807 19:37:52.897122     956 command_runner.go:130] >         }
	I0807 19:37:52.897122     956 command_runner.go:130] >         cache 30
	I0807 19:37:52.897122     956 command_runner.go:130] >         loop
	I0807 19:37:52.897122     956 command_runner.go:130] >         reload
	I0807 19:37:52.897122     956 command_runner.go:130] >         loadbalance
	I0807 19:37:52.897122     956 command_runner.go:130] >     }
	I0807 19:37:52.897122     956 command_runner.go:130] > kind: ConfigMap
	I0807 19:37:52.897122     956 command_runner.go:130] > metadata:
	I0807 19:37:52.897195     956 command_runner.go:130] >   creationTimestamp: "2024-08-07T19:37:39Z"
	I0807 19:37:52.897195     956 command_runner.go:130] >   name: coredns
	I0807 19:37:52.897195     956 command_runner.go:130] >   namespace: kube-system
	I0807 19:37:52.897195     956 command_runner.go:130] >   resourceVersion: "269"
	I0807 19:37:52.897195     956 command_runner.go:130] >   uid: 50ec09a5-08ef-4ec9-9f32-41c17ac31721
	I0807 19:37:52.897448     956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0807 19:37:53.027019     956 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 19:37:53.454203     956 command_runner.go:130] > configmap/coredns replaced
	I0807 19:37:53.455426     956 start.go:971] {"host.minikube.internal": 172.28.224.1} host record injected into CoreDNS's ConfigMap
	I0807 19:37:53.456552     956 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 19:37:53.456552     956 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 19:37:53.457342     956 kapi.go:59] client config for multinode-116700: &rest.Config{Host:"https://172.28.224.86:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-116700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-116700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1da64c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0807 19:37:53.457342     956 kapi.go:59] client config for multinode-116700: &rest.Config{Host:"https://172.28.224.86:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-116700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-116700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1da64c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0807 19:37:53.459815     956 cert_rotation.go:137] Starting client certificate rotation controller
	I0807 19:37:53.460419     956 node_ready.go:35] waiting up to 6m0s for node "multinode-116700" to be "Ready" ...
	I0807 19:37:53.460539     956 round_trippers.go:463] GET https://172.28.224.86:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0807 19:37:53.460539     956 round_trippers.go:469] Request Headers:
	I0807 19:37:53.460611     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:37:53.460611     956 round_trippers.go:469] Request Headers:
	I0807 19:37:53.460611     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:37:53.460744     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:37:53.460611     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:37:53.460778     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:37:53.479552     956 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0807 19:37:53.480659     956 round_trippers.go:577] Response Headers:
	I0807 19:37:53.480701     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:37:53.480701     956 round_trippers.go:580]     Content-Length: 291
	I0807 19:37:53.480701     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:37:53 GMT
	I0807 19:37:53.480742     956 round_trippers.go:580]     Audit-Id: db0d2778-e111-49d3-b484-fff75de4f631
	I0807 19:37:53.480787     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:37:53.480787     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:37:53.480787     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:37:53.480850     956 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"55a49925-8634-46aa-af84-11a4fc7d446a","resourceVersion":"387","creationTimestamp":"2024-08-07T19:37:39Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0807 19:37:53.480850     956 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0807 19:37:53.480931     956 round_trippers.go:577] Response Headers:
	I0807 19:37:53.480931     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:37:53 GMT
	I0807 19:37:53.480981     956 round_trippers.go:580]     Audit-Id: 5b4a6af5-7a2d-4caf-a714-4ee0bb583fdf
	I0807 19:37:53.481051     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:37:53.481051     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:37:53.481115     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:37:53.481115     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:37:53.482155     956 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"55a49925-8634-46aa-af84-11a4fc7d446a","resourceVersion":"387","creationTimestamp":"2024-08-07T19:37:39Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0807 19:37:53.482294     956 round_trippers.go:463] PUT https://172.28.224.86:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0807 19:37:53.482294     956 round_trippers.go:469] Request Headers:
	I0807 19:37:53.482294     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:37:53.482294     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:37:53.482294     956 round_trippers.go:473]     Content-Type: application/json
	I0807 19:37:53.483676     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:37:53.531650     956 round_trippers.go:574] Response Status: 200 OK in 49 milliseconds
	I0807 19:37:53.531650     956 round_trippers.go:577] Response Headers:
	I0807 19:37:53.531650     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:37:53.531650     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:37:53.531650     956 round_trippers.go:580]     Content-Length: 291
	I0807 19:37:53.531650     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:37:53 GMT
	I0807 19:37:53.531650     956 round_trippers.go:580]     Audit-Id: 15d106fb-aca9-4e8f-b899-488332f93ca2
	I0807 19:37:53.531650     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:37:53.531650     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:37:53.531650     956 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"55a49925-8634-46aa-af84-11a4fc7d446a","resourceVersion":"402","creationTimestamp":"2024-08-07T19:37:39Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0807 19:37:53.967527     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:37:53.967648     956 round_trippers.go:469] Request Headers:
	I0807 19:37:53.967648     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:37:53.967527     956 round_trippers.go:463] GET https://172.28.224.86:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0807 19:37:53.967771     956 round_trippers.go:469] Request Headers:
	I0807 19:37:53.967826     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:37:53.967826     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:37:53.967648     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:37:53.971730     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:37:53.971730     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:37:53.971730     956 round_trippers.go:577] Response Headers:
	I0807 19:37:53.972356     956 round_trippers.go:577] Response Headers:
	I0807 19:37:53.972356     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:37:53 GMT
	I0807 19:37:53.972356     956 round_trippers.go:580]     Audit-Id: 728c7b97-7c6e-4ddf-96e3-c158766a723f
	I0807 19:37:53.972356     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:37:53.972356     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:37:53.972439     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:37:53.972439     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:37:53.972478     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:37:53 GMT
	I0807 19:37:53.972356     956 round_trippers.go:580]     Audit-Id: 5d917ffb-2533-4036-92b1-54e9d99cf24b
	I0807 19:37:53.972589     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:37:53.972685     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:37:53.972685     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:37:53.972816     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:37:53.972816     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:37:53.973105     956 round_trippers.go:580]     Content-Length: 291
	I0807 19:37:53.973202     956 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"55a49925-8634-46aa-af84-11a4fc7d446a","resourceVersion":"412","creationTimestamp":"2024-08-07T19:37:39Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0807 19:37:53.973702     956 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-116700" context rescaled to 1 replicas
	I0807 19:37:54.475192     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:37:54.475287     956 round_trippers.go:469] Request Headers:
	I0807 19:37:54.475287     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:37:54.475366     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:37:54.496816     956 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0807 19:37:54.496816     956 round_trippers.go:577] Response Headers:
	I0807 19:37:54.496816     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:37:54.496816     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:37:54.496816     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:37:54.496816     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:37:54 GMT
	I0807 19:37:54.496816     956 round_trippers.go:580]     Audit-Id: 8c03de01-b642-442d-9b3e-d1b83665d8ec
	I0807 19:37:54.496816     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:37:54.497377     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:37:54.974206     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:37:54.974206     956 round_trippers.go:469] Request Headers:
	I0807 19:37:54.974206     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:37:54.974206     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:37:54.978487     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:37:54.979347     956 round_trippers.go:577] Response Headers:
	I0807 19:37:54.979347     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:37:54.979347     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:37:54.979347     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:37:54.979442     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:37:54.979442     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:37:54 GMT
	I0807 19:37:54.979442     956 round_trippers.go:580]     Audit-Id: 6f244071-1304-4c88-b25b-ed2537f9eccb
	I0807 19:37:54.979881     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:37:55.044615     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:37:55.044615     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:37:55.048525     956 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 19:37:55.051761     956 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 19:37:55.051840     956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0807 19:37:55.052046     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:37:55.072458     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:37:55.072458     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:37:55.073427     956 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 19:37:55.074499     956 kapi.go:59] client config for multinode-116700: &rest.Config{Host:"https://172.28.224.86:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-116700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-116700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1da64c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0807 19:37:55.074499     956 addons.go:234] Setting addon default-storageclass=true in "multinode-116700"
	I0807 19:37:55.075412     956 host.go:66] Checking if "multinode-116700" exists ...
	I0807 19:37:55.076439     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:37:55.466105     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:37:55.466105     956 round_trippers.go:469] Request Headers:
	I0807 19:37:55.466105     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:37:55.466200     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:37:55.474555     956 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 19:37:55.474614     956 round_trippers.go:577] Response Headers:
	I0807 19:37:55.474685     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:37:55 GMT
	I0807 19:37:55.474724     956 round_trippers.go:580]     Audit-Id: e6987b9d-2e41-4715-8914-62c622c6aa3c
	I0807 19:37:55.474774     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:37:55.474774     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:37:55.474774     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:37:55.474837     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:37:55.475920     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:37:55.476662     956 node_ready.go:53] node "multinode-116700" has status "Ready":"False"
	I0807 19:37:55.970130     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:37:55.970228     956 round_trippers.go:469] Request Headers:
	I0807 19:37:55.970228     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:37:55.970358     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:37:55.973892     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:37:55.973892     956 round_trippers.go:577] Response Headers:
	I0807 19:37:55.973892     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:37:55.973892     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:37:55.973892     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:37:55 GMT
	I0807 19:37:55.973998     956 round_trippers.go:580]     Audit-Id: 48916a45-6685-4536-b1a2-ab0ccad8be4d
	I0807 19:37:55.973998     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:37:55.974094     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:37:55.974338     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:37:56.463553     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:37:56.463553     956 round_trippers.go:469] Request Headers:
	I0807 19:37:56.463553     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:37:56.463553     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:37:56.468091     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:37:56.468091     956 round_trippers.go:577] Response Headers:
	I0807 19:37:56.468091     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:37:56.468091     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:37:56.468091     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:37:56.468091     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:37:56.468091     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:37:56 GMT
	I0807 19:37:56.468091     956 round_trippers.go:580]     Audit-Id: eb134705-72e4-45eb-b644-b313ea02750a
	I0807 19:37:56.468643     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:37:56.976256     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:37:56.976256     956 round_trippers.go:469] Request Headers:
	I0807 19:37:56.976256     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:37:56.976256     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:37:56.997094     956 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0807 19:37:56.998206     956 round_trippers.go:577] Response Headers:
	I0807 19:37:56.998288     956 round_trippers.go:580]     Audit-Id: 681e1a61-3d74-4c88-9222-32a0093739c3
	I0807 19:37:56.998478     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:37:56.998706     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:37:56.998706     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:37:56.998810     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:37:56.998922     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:37:57 GMT
	I0807 19:37:57.000051     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:37:57.463720     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:37:57.463720     956 round_trippers.go:469] Request Headers:
	I0807 19:37:57.463720     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:37:57.463720     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:37:57.466885     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:37:57.466885     956 round_trippers.go:577] Response Headers:
	I0807 19:37:57.466885     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:37:57.466885     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:37:57.466885     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:37:57 GMT
	I0807 19:37:57.466885     956 round_trippers.go:580]     Audit-Id: ee5b0a9c-51a9-49e8-b0cd-83b9cf6e85a2
	I0807 19:37:57.467682     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:37:57.467682     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:37:57.467939     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:37:57.565755     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:37:57.565837     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:37:57.565837     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:37:57.780218     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:37:57.780262     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:37:57.780338     956 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0807 19:37:57.780338     956 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0807 19:37:57.780464     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:37:57.971328     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:37:57.971328     956 round_trippers.go:469] Request Headers:
	I0807 19:37:57.971328     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:37:57.971328     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:37:57.976505     956 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 19:37:57.976505     956 round_trippers.go:577] Response Headers:
	I0807 19:37:57.976505     956 round_trippers.go:580]     Audit-Id: 39ec839a-68e4-4ad4-a337-9281cd10a31c
	I0807 19:37:57.976592     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:37:57.976592     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:37:57.976592     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:37:57.976592     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:37:57.976592     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:37:57 GMT
	I0807 19:37:57.976592     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:37:57.977523     956 node_ready.go:53] node "multinode-116700" has status "Ready":"False"
	I0807 19:37:58.465711     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:37:58.465963     956 round_trippers.go:469] Request Headers:
	I0807 19:37:58.465963     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:37:58.465963     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:37:58.487573     956 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0807 19:37:58.488486     956 round_trippers.go:577] Response Headers:
	I0807 19:37:58.488486     956 round_trippers.go:580]     Audit-Id: 404ea1b2-443a-4368-a045-5cb5958abfd7
	I0807 19:37:58.488486     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:37:58.488486     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:37:58.488582     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:37:58.488582     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:37:58.488582     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:37:58 GMT
	I0807 19:37:58.492666     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:37:58.972375     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:37:58.972467     956 round_trippers.go:469] Request Headers:
	I0807 19:37:58.972467     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:37:58.972467     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:37:58.976943     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:37:58.976943     956 round_trippers.go:577] Response Headers:
	I0807 19:37:58.976943     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:37:58.976943     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:37:58.976943     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:37:58 GMT
	I0807 19:37:58.976943     956 round_trippers.go:580]     Audit-Id: 49f2dd76-f9b3-4ad8-97ba-23396375bb29
	I0807 19:37:58.976943     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:37:58.976943     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:37:58.976943     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:37:59.463120     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:37:59.463120     956 round_trippers.go:469] Request Headers:
	I0807 19:37:59.463120     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:37:59.463120     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:37:59.467550     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:37:59.467880     956 round_trippers.go:577] Response Headers:
	I0807 19:37:59.467880     956 round_trippers.go:580]     Audit-Id: 9205db01-227c-4390-9224-e4c2d9bec072
	I0807 19:37:59.467880     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:37:59.467880     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:37:59.467880     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:37:59.467880     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:37:59.467880     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:37:59 GMT
	I0807 19:37:59.467880     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:37:59.966455     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:37:59.966455     956 round_trippers.go:469] Request Headers:
	I0807 19:37:59.966455     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:37:59.966455     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:37:59.970761     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:37:59.970761     956 round_trippers.go:577] Response Headers:
	I0807 19:37:59.970761     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:37:59.970761     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:37:59 GMT
	I0807 19:37:59.970876     956 round_trippers.go:580]     Audit-Id: d755a90f-00fa-4085-9da3-a2596780a0bd
	I0807 19:37:59.970876     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:37:59.970952     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:37:59.970981     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:37:59.972193     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:38:00.229617     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:38:00.229617     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:38:00.229617     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:38:00.451498     956 main.go:141] libmachine: [stdout =====>] : 172.28.224.86
	
	I0807 19:38:00.451583     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:38:00.451583     956 sshutil.go:53] new ssh client: &{IP:172.28.224.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700\id_rsa Username:docker}
	I0807 19:38:00.465491     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:00.465491     956 round_trippers.go:469] Request Headers:
	I0807 19:38:00.465491     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:00.465491     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:00.469480     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:00.469916     956 round_trippers.go:577] Response Headers:
	I0807 19:38:00.469916     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:00.469916     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:00.469916     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:00.469916     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:00 GMT
	I0807 19:38:00.469916     956 round_trippers.go:580]     Audit-Id: b1efa84f-bc84-47bc-af13-0ba13cbfa430
	I0807 19:38:00.469916     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:00.470178     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:38:00.470475     956 node_ready.go:53] node "multinode-116700" has status "Ready":"False"
	I0807 19:38:00.597255     956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 19:38:00.974310     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:00.974310     956 round_trippers.go:469] Request Headers:
	I0807 19:38:00.974310     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:00.974310     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:00.977023     956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 19:38:00.977449     956 round_trippers.go:577] Response Headers:
	I0807 19:38:00.977449     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:00.977449     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:00 GMT
	I0807 19:38:00.977449     956 round_trippers.go:580]     Audit-Id: 3eacb667-2347-4d73-9e9d-6127a3ecf49e
	I0807 19:38:00.977449     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:00.977449     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:00.977449     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:00.977885     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:38:01.227683     956 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0807 19:38:01.227998     956 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0807 19:38:01.228095     956 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0807 19:38:01.228095     956 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0807 19:38:01.228095     956 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0807 19:38:01.228160     956 command_runner.go:130] > pod/storage-provisioner created
	I0807 19:38:01.465814     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:01.465814     956 round_trippers.go:469] Request Headers:
	I0807 19:38:01.466057     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:01.466057     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:01.738261     956 round_trippers.go:574] Response Status: 200 OK in 272 milliseconds
	I0807 19:38:01.738261     956 round_trippers.go:577] Response Headers:
	I0807 19:38:01.738261     956 round_trippers.go:580]     Audit-Id: d22b0227-0b26-4ff1-bed1-3028135a1995
	I0807 19:38:01.738261     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:01.738261     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:01.738261     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:01.738261     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:01.738261     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:01 GMT
	I0807 19:38:01.739265     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:38:01.976012     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:01.976012     956 round_trippers.go:469] Request Headers:
	I0807 19:38:01.976012     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:01.976012     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:01.979654     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:01.979814     956 round_trippers.go:577] Response Headers:
	I0807 19:38:01.979814     956 round_trippers.go:580]     Audit-Id: 233f5849-cbec-41a4-93bd-3928b0a9982a
	I0807 19:38:01.979814     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:01.979814     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:01.979814     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:01.979814     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:01.979814     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:02 GMT
	I0807 19:38:01.980137     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:38:02.467704     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:02.467704     956 round_trippers.go:469] Request Headers:
	I0807 19:38:02.467704     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:02.467704     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:02.470944     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:02.471850     956 round_trippers.go:577] Response Headers:
	I0807 19:38:02.471850     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:02.471850     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:02 GMT
	I0807 19:38:02.471850     956 round_trippers.go:580]     Audit-Id: 789d67bf-5116-4e8f-a390-8b82c3dc9e10
	I0807 19:38:02.471850     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:02.471850     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:02.471850     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:02.472787     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:38:02.473227     956 node_ready.go:53] node "multinode-116700" has status "Ready":"False"
	I0807 19:38:02.845048     956 main.go:141] libmachine: [stdout =====>] : 172.28.224.86
	
	I0807 19:38:02.845048     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:38:02.846369     956 sshutil.go:53] new ssh client: &{IP:172.28.224.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700\id_rsa Username:docker}
	I0807 19:38:02.973386     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:02.973457     956 round_trippers.go:469] Request Headers:
	I0807 19:38:02.973457     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:02.973556     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:02.986418     956 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0807 19:38:02.986418     956 round_trippers.go:577] Response Headers:
	I0807 19:38:02.986418     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:02.986418     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:02.986418     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:02.986418     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:02.986418     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:03 GMT
	I0807 19:38:02.986418     956 round_trippers.go:580]     Audit-Id: 60cd91b7-9e82-496f-bc69-b29a975ffe08
	I0807 19:38:02.987385     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:38:02.989407     956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0807 19:38:03.157725     956 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0807 19:38:03.158129     956 round_trippers.go:463] GET https://172.28.224.86:8443/apis/storage.k8s.io/v1/storageclasses
	I0807 19:38:03.158129     956 round_trippers.go:469] Request Headers:
	I0807 19:38:03.158172     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:03.158172     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:03.161349     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:03.161349     956 round_trippers.go:577] Response Headers:
	I0807 19:38:03.161349     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:03.161349     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:03.161349     956 round_trippers.go:580]     Content-Length: 1273
	I0807 19:38:03.161349     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:03 GMT
	I0807 19:38:03.161349     956 round_trippers.go:580]     Audit-Id: f0cc057c-3150-477e-a494-0d5ccea1a43b
	I0807 19:38:03.161349     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:03.161349     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:03.161349     956 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"441"},"items":[{"metadata":{"name":"standard","uid":"f1bb8af5-206c-41ba-8c5d-b0175ca7eb79","resourceVersion":"441","creationTimestamp":"2024-08-07T19:38:03Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-07T19:38:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0807 19:38:03.162329     956 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"f1bb8af5-206c-41ba-8c5d-b0175ca7eb79","resourceVersion":"441","creationTimestamp":"2024-08-07T19:38:03Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-07T19:38:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0807 19:38:03.162329     956 round_trippers.go:463] PUT https://172.28.224.86:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0807 19:38:03.162329     956 round_trippers.go:469] Request Headers:
	I0807 19:38:03.162329     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:03.162329     956 round_trippers.go:473]     Content-Type: application/json
	I0807 19:38:03.162329     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:03.165333     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:03.165333     956 round_trippers.go:577] Response Headers:
	I0807 19:38:03.165333     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:03.165333     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:03.165333     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:03.165333     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:03.165333     956 round_trippers.go:580]     Content-Length: 1220
	I0807 19:38:03.165333     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:03 GMT
	I0807 19:38:03.165333     956 round_trippers.go:580]     Audit-Id: 3c9e0141-46cd-4445-8ec0-e5ad2453241a
	I0807 19:38:03.165333     956 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"f1bb8af5-206c-41ba-8c5d-b0175ca7eb79","resourceVersion":"441","creationTimestamp":"2024-08-07T19:38:03Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-07T19:38:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0807 19:38:03.169327     956 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0807 19:38:03.178342     956 addons.go:510] duration metric: took 10.5092213s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0807 19:38:03.461816     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:03.461884     956 round_trippers.go:469] Request Headers:
	I0807 19:38:03.461884     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:03.461884     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:03.465386     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:03.465833     956 round_trippers.go:577] Response Headers:
	I0807 19:38:03.465833     956 round_trippers.go:580]     Audit-Id: a6b4f035-e200-4037-a29d-e35b6cb48a61
	I0807 19:38:03.465833     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:03.465833     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:03.465833     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:03.465833     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:03.465833     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:03 GMT
	I0807 19:38:03.466183     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:38:03.963446     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:03.963446     956 round_trippers.go:469] Request Headers:
	I0807 19:38:03.963511     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:03.963511     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:03.966985     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:03.966985     956 round_trippers.go:577] Response Headers:
	I0807 19:38:03.966985     956 round_trippers.go:580]     Audit-Id: 69379832-076a-49be-9b4a-fedd3935f5b9
	I0807 19:38:03.966985     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:03.966985     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:03.966985     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:03.966985     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:03.966985     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:03 GMT
	I0807 19:38:03.968588     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:38:04.461512     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:04.461582     956 round_trippers.go:469] Request Headers:
	I0807 19:38:04.461582     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:04.461582     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:04.464947     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:04.464947     956 round_trippers.go:577] Response Headers:
	I0807 19:38:04.464947     956 round_trippers.go:580]     Audit-Id: 5fc657e5-1ddf-465e-8aa6-4ea6c300321d
	I0807 19:38:04.464947     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:04.464947     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:04.464947     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:04.465644     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:04.465644     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:04 GMT
	I0807 19:38:04.465976     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:38:04.977103     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:04.977203     956 round_trippers.go:469] Request Headers:
	I0807 19:38:04.977203     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:04.977203     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:04.980635     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:04.980635     956 round_trippers.go:577] Response Headers:
	I0807 19:38:04.980635     956 round_trippers.go:580]     Audit-Id: 63c14043-2c9e-436c-a966-bc27c08fc651
	I0807 19:38:04.980635     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:04.981473     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:04.981473     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:04.981473     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:04.981473     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:05 GMT
	I0807 19:38:04.981604     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:38:04.982240     956 node_ready.go:53] node "multinode-116700" has status "Ready":"False"
	I0807 19:38:05.463942     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:05.463942     956 round_trippers.go:469] Request Headers:
	I0807 19:38:05.463942     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:05.463942     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:05.468503     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:38:05.469198     956 round_trippers.go:577] Response Headers:
	I0807 19:38:05.469198     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:05.469198     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:05.469198     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:05.469198     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:05.469198     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:05 GMT
	I0807 19:38:05.469198     956 round_trippers.go:580]     Audit-Id: 1aea4e0b-e4d7-4728-b74c-712d4774613b
	I0807 19:38:05.469661     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:38:05.961680     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:05.961788     956 round_trippers.go:469] Request Headers:
	I0807 19:38:05.961788     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:05.961788     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:05.964746     956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 19:38:05.964746     956 round_trippers.go:577] Response Headers:
	I0807 19:38:05.964746     956 round_trippers.go:580]     Audit-Id: 8fee5e18-c3fa-4f14-b7bd-78411a41c6c5
	I0807 19:38:05.964746     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:05.964746     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:05.964746     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:05.964746     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:05.964746     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:05 GMT
	I0807 19:38:05.965567     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:38:06.475618     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:06.475737     956 round_trippers.go:469] Request Headers:
	I0807 19:38:06.475737     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:06.475737     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:06.480048     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:38:06.480135     956 round_trippers.go:577] Response Headers:
	I0807 19:38:06.480135     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:06.480135     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:06.480135     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:06 GMT
	I0807 19:38:06.480135     956 round_trippers.go:580]     Audit-Id: b44e63fa-79d8-4305-91d2-3062628deb62
	I0807 19:38:06.480254     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:06.480254     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:06.480946     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:38:06.976284     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:06.976628     956 round_trippers.go:469] Request Headers:
	I0807 19:38:06.976628     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:06.976628     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:06.979265     956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 19:38:06.979265     956 round_trippers.go:577] Response Headers:
	I0807 19:38:06.979265     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:06.979265     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:06.979265     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:06.979265     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:06.980052     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:07 GMT
	I0807 19:38:06.980052     956 round_trippers.go:580]     Audit-Id: 1ad3b088-78a1-4191-a134-580c6a64266b
	I0807 19:38:06.981105     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:38:07.475251     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:07.475251     956 round_trippers.go:469] Request Headers:
	I0807 19:38:07.475251     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:07.475251     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:07.478638     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:07.478638     956 round_trippers.go:577] Response Headers:
	I0807 19:38:07.478638     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:07 GMT
	I0807 19:38:07.478638     956 round_trippers.go:580]     Audit-Id: e2a20866-4c1c-455a-9a6e-518b982053ab
	I0807 19:38:07.478638     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:07.478638     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:07.478638     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:07.478638     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:07.479838     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:38:07.479838     956 node_ready.go:53] node "multinode-116700" has status "Ready":"False"
	I0807 19:38:07.972666     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:07.972666     956 round_trippers.go:469] Request Headers:
	I0807 19:38:07.972666     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:07.972666     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:07.976665     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:07.976665     956 round_trippers.go:577] Response Headers:
	I0807 19:38:07.976665     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:07.977194     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:07 GMT
	I0807 19:38:07.977194     956 round_trippers.go:580]     Audit-Id: ac790726-1440-48c2-adb5-89e3a16a882d
	I0807 19:38:07.977194     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:07.977194     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:07.977194     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:07.977526     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:38:08.473667     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:08.473667     956 round_trippers.go:469] Request Headers:
	I0807 19:38:08.473800     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:08.473800     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:08.477564     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:08.477564     956 round_trippers.go:577] Response Headers:
	I0807 19:38:08.477564     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:08 GMT
	I0807 19:38:08.478370     956 round_trippers.go:580]     Audit-Id: dd7eba95-4d8f-4ab8-9b4a-6cadcae9581d
	I0807 19:38:08.478370     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:08.478370     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:08.478370     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:08.478370     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:08.478631     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:38:08.974050     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:08.974050     956 round_trippers.go:469] Request Headers:
	I0807 19:38:08.974050     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:08.974050     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:08.978143     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:38:08.978143     956 round_trippers.go:577] Response Headers:
	I0807 19:38:08.978143     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:08.978143     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:08 GMT
	I0807 19:38:08.978143     956 round_trippers.go:580]     Audit-Id: fba5a4be-50bb-41b1-83d1-717cf40567db
	I0807 19:38:08.978143     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:08.978143     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:08.978143     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:08.978143     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:38:09.473868     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:09.473936     956 round_trippers.go:469] Request Headers:
	I0807 19:38:09.473936     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:09.473936     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:09.477272     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:09.477701     956 round_trippers.go:577] Response Headers:
	I0807 19:38:09.477701     956 round_trippers.go:580]     Audit-Id: a3f02bfc-0963-47d7-9c3f-5b1b456a0d65
	I0807 19:38:09.477701     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:09.477900     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:09.477900     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:09.477900     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:09.477900     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:09 GMT
	I0807 19:38:09.478209     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:38:09.962581     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:09.962581     956 round_trippers.go:469] Request Headers:
	I0807 19:38:09.962709     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:09.962709     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:09.967030     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:38:09.967030     956 round_trippers.go:577] Response Headers:
	I0807 19:38:09.967030     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:09.967030     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:09.967030     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:09.967030     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:09.967030     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:09 GMT
	I0807 19:38:09.967030     956 round_trippers.go:580]     Audit-Id: a584c970-3a0c-4106-a802-30802674dd81
	I0807 19:38:09.968088     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"365","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0807 19:38:09.968817     956 node_ready.go:53] node "multinode-116700" has status "Ready":"False"
	I0807 19:38:10.462894     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:10.462894     956 round_trippers.go:469] Request Headers:
	I0807 19:38:10.462894     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:10.462894     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:10.467624     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:38:10.467624     956 round_trippers.go:577] Response Headers:
	I0807 19:38:10.467624     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:10.467624     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:10 GMT
	I0807 19:38:10.467906     956 round_trippers.go:580]     Audit-Id: dd0f3394-6000-471a-9f28-ac970ef9e39f
	I0807 19:38:10.467906     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:10.467906     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:10.467906     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:10.468128     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"444","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0807 19:38:10.962499     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:10.962886     956 round_trippers.go:469] Request Headers:
	I0807 19:38:10.962886     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:10.962886     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:10.967396     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:10.967396     956 round_trippers.go:577] Response Headers:
	I0807 19:38:10.967396     956 round_trippers.go:580]     Audit-Id: dec96949-7fc4-4d69-9d80-6a0c414f5f4a
	I0807 19:38:10.967396     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:10.967396     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:10.967396     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:10.967396     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:10.967396     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:10 GMT
	I0807 19:38:10.968112     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"444","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0807 19:38:11.476178     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:11.476254     956 round_trippers.go:469] Request Headers:
	I0807 19:38:11.476254     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:11.476254     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:11.480045     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:11.480045     956 round_trippers.go:577] Response Headers:
	I0807 19:38:11.480045     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:11.480045     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:11.480045     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:11.480045     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:11.480045     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:11 GMT
	I0807 19:38:11.480045     956 round_trippers.go:580]     Audit-Id: 03c5d717-224e-4f46-abe9-31c4679f7563
	I0807 19:38:11.480933     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"444","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0807 19:38:11.974173     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:11.974542     956 round_trippers.go:469] Request Headers:
	I0807 19:38:11.974542     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:11.974634     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:11.978994     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:38:11.978994     956 round_trippers.go:577] Response Headers:
	I0807 19:38:11.978994     956 round_trippers.go:580]     Audit-Id: e0bbfdf0-86db-4b3b-acb9-d55999b6d20b
	I0807 19:38:11.978994     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:11.978994     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:11.978994     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:11.978994     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:11.978994     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:11 GMT
	I0807 19:38:11.979609     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"444","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0807 19:38:11.980166     956 node_ready.go:53] node "multinode-116700" has status "Ready":"False"
	I0807 19:38:12.472574     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:12.472639     956 round_trippers.go:469] Request Headers:
	I0807 19:38:12.472639     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:12.472639     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:12.481427     956 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 19:38:12.481997     956 round_trippers.go:577] Response Headers:
	I0807 19:38:12.482053     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:12.482053     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:12.482053     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:12.482053     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:12.482092     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:12 GMT
	I0807 19:38:12.482092     956 round_trippers.go:580]     Audit-Id: 60d7cb4c-2c1b-4667-a0d7-8a5e3df72123
	I0807 19:38:12.483359     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"444","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0807 19:38:12.968988     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:12.969133     956 round_trippers.go:469] Request Headers:
	I0807 19:38:12.969133     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:12.969200     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:12.972571     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:12.972571     956 round_trippers.go:577] Response Headers:
	I0807 19:38:12.972571     956 round_trippers.go:580]     Audit-Id: 743849ec-3c99-475b-9aeb-a9625decd2c7
	I0807 19:38:12.972571     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:12.972571     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:12.972571     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:12.972571     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:12.972571     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:12 GMT
	I0807 19:38:12.973523     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"444","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0807 19:38:13.465782     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:13.466011     956 round_trippers.go:469] Request Headers:
	I0807 19:38:13.466011     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:13.466011     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:13.472274     956 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 19:38:13.472274     956 round_trippers.go:577] Response Headers:
	I0807 19:38:13.472274     956 round_trippers.go:580]     Audit-Id: 728e0ba0-8b7b-4816-a066-aa6715579f9d
	I0807 19:38:13.472274     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:13.472274     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:13.472621     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:13.472621     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:13.472621     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:13 GMT
	I0807 19:38:13.473100     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"447","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 5031 chars]
	I0807 19:38:13.473592     956 node_ready.go:49] node "multinode-116700" has status "Ready":"True"
	I0807 19:38:13.473649     956 node_ready.go:38] duration metric: took 20.0129168s for node "multinode-116700" to be "Ready" ...
	I0807 19:38:13.473707     956 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 19:38:13.473765     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods
	I0807 19:38:13.473836     956 round_trippers.go:469] Request Headers:
	I0807 19:38:13.473855     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:13.473855     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:13.477295     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:13.478100     956 round_trippers.go:577] Response Headers:
	I0807 19:38:13.478100     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:13.478100     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:13 GMT
	I0807 19:38:13.478100     956 round_trippers.go:580]     Audit-Id: bbed5804-bf09-4ec9-b6ee-791d88c1236f
	I0807 19:38:13.478100     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:13.478100     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:13.478100     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:13.479412     956 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"447"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"399","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53147 chars]
	I0807 19:38:13.482992     956 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace to be "Ready" ...
	I0807 19:38:13.483989     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 19:38:13.483989     956 round_trippers.go:469] Request Headers:
	I0807 19:38:13.483989     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:13.483989     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:13.493139     956 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0807 19:38:13.493701     956 round_trippers.go:577] Response Headers:
	I0807 19:38:13.494339     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:13 GMT
	I0807 19:38:13.494339     956 round_trippers.go:580]     Audit-Id: f3341b8c-6b7d-4021-9e3f-ccdcbb24b406
	I0807 19:38:13.494339     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:13.494420     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:13.494420     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:13.494420     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:13.494682     956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"399","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 4942 chars]
	I0807 19:38:13.985282     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 19:38:13.985282     956 round_trippers.go:469] Request Headers:
	I0807 19:38:13.985349     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:13.985349     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:13.993952     956 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 19:38:13.994264     956 round_trippers.go:577] Response Headers:
	I0807 19:38:13.994264     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:13.994264     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:13.994264     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:13.994264     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:13.994264     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:14 GMT
	I0807 19:38:13.994264     956 round_trippers.go:580]     Audit-Id: a03894e8-e911-49ee-a11c-6df5d51663b7
	I0807 19:38:13.994526     956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"453","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0807 19:38:13.995568     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:13.995629     956 round_trippers.go:469] Request Headers:
	I0807 19:38:13.995629     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:13.995629     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:14.021174     956 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0807 19:38:14.021174     956 round_trippers.go:577] Response Headers:
	I0807 19:38:14.021174     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:14 GMT
	I0807 19:38:14.021174     956 round_trippers.go:580]     Audit-Id: e3b8bf3d-d2c0-419c-aff6-83d0dff63544
	I0807 19:38:14.021174     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:14.021174     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:14.021248     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:14.021248     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:14.021464     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"448","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0807 19:38:14.494251     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 19:38:14.494311     956 round_trippers.go:469] Request Headers:
	I0807 19:38:14.494311     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:14.494311     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:14.497694     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:14.498602     956 round_trippers.go:577] Response Headers:
	I0807 19:38:14.498602     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:14.498602     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:14.498602     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:14 GMT
	I0807 19:38:14.498602     956 round_trippers.go:580]     Audit-Id: c564028f-09dd-48cd-b2ae-c9fbdfadf776
	I0807 19:38:14.498602     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:14.498674     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:14.498795     956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"453","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0807 19:38:14.499553     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:14.499553     956 round_trippers.go:469] Request Headers:
	I0807 19:38:14.499553     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:14.499553     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:14.502868     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:14.502868     956 round_trippers.go:577] Response Headers:
	I0807 19:38:14.503616     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:14.503616     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:14 GMT
	I0807 19:38:14.503616     956 round_trippers.go:580]     Audit-Id: a3883e70-1976-4b06-bbb0-64057329600e
	I0807 19:38:14.503616     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:14.503616     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:14.503616     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:14.504122     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"448","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0807 19:38:14.983832     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 19:38:14.983832     956 round_trippers.go:469] Request Headers:
	I0807 19:38:14.984033     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:14.984033     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:14.986497     956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 19:38:14.986497     956 round_trippers.go:577] Response Headers:
	I0807 19:38:14.986497     956 round_trippers.go:580]     Audit-Id: bb9b55aa-b003-4dec-83e6-de81d68ab89c
	I0807 19:38:14.986497     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:14.986497     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:14.986497     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:14.986497     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:14.986497     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:15 GMT
	I0807 19:38:14.987518     956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"453","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0807 19:38:14.987518     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:14.987518     956 round_trippers.go:469] Request Headers:
	I0807 19:38:14.987518     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:14.988503     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:14.990490     956 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0807 19:38:14.990490     956 round_trippers.go:577] Response Headers:
	I0807 19:38:14.990490     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:15 GMT
	I0807 19:38:14.990490     956 round_trippers.go:580]     Audit-Id: 84cba313-02ad-470c-866a-f65e9489b923
	I0807 19:38:14.990490     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:14.990490     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:14.990490     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:14.990490     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:14.991499     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"448","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0807 19:38:15.486931     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 19:38:15.487003     956 round_trippers.go:469] Request Headers:
	I0807 19:38:15.487061     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:15.487061     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:15.489878     956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 19:38:15.489878     956 round_trippers.go:577] Response Headers:
	I0807 19:38:15.489878     956 round_trippers.go:580]     Audit-Id: 32fd7f45-0394-402b-92fc-0779696171d4
	I0807 19:38:15.489878     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:15.489878     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:15.489878     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:15.489878     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:15.489878     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:15 GMT
	I0807 19:38:15.491879     956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"453","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0807 19:38:15.492838     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:15.492838     956 round_trippers.go:469] Request Headers:
	I0807 19:38:15.492838     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:15.492838     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:15.496444     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:15.496444     956 round_trippers.go:577] Response Headers:
	I0807 19:38:15.496444     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:15.496444     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:15 GMT
	I0807 19:38:15.496444     956 round_trippers.go:580]     Audit-Id: b860cb4a-6e02-4af3-b894-9da4c6651774
	I0807 19:38:15.496444     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:15.496444     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:15.496444     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:15.497449     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"448","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0807 19:38:15.497449     956 pod_ready.go:102] pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace has status "Ready":"False"
	I0807 19:38:15.990128     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 19:38:15.990337     956 round_trippers.go:469] Request Headers:
	I0807 19:38:15.990337     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:15.990337     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:15.993738     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:15.994689     956 round_trippers.go:577] Response Headers:
	I0807 19:38:15.994689     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:15.994689     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:15.994689     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:15.994689     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:16 GMT
	I0807 19:38:15.994689     956 round_trippers.go:580]     Audit-Id: a40e9fbb-0122-44a9-b9e5-b2cc321611d0
	I0807 19:38:15.994689     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:15.995131     956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"467","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0807 19:38:15.996015     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:15.996077     956 round_trippers.go:469] Request Headers:
	I0807 19:38:15.996077     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:15.996077     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:16.001027     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:38:16.001027     956 round_trippers.go:577] Response Headers:
	I0807 19:38:16.001027     956 round_trippers.go:580]     Audit-Id: d30df620-4119-4be2-ad0a-baca6c35a3f4
	I0807 19:38:16.001027     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:16.001027     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:16.001027     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:16.001027     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:16.001027     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:16 GMT
	I0807 19:38:16.001863     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"448","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0807 19:38:16.002407     956 pod_ready.go:92] pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace has status "Ready":"True"
	I0807 19:38:16.002407     956 pod_ready.go:81] duration metric: took 2.5193828s for pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace to be "Ready" ...
	I0807 19:38:16.002529     956 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 19:38:16.002624     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-116700
	I0807 19:38:16.002624     956 round_trippers.go:469] Request Headers:
	I0807 19:38:16.002624     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:16.002624     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:16.005202     956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 19:38:16.005202     956 round_trippers.go:577] Response Headers:
	I0807 19:38:16.005202     956 round_trippers.go:580]     Audit-Id: 611f6c3e-61a2-4619-b258-a5831228686c
	I0807 19:38:16.005202     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:16.005202     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:16.005202     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:16.005202     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:16.005202     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:16 GMT
	I0807 19:38:16.005202     956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-116700","namespace":"kube-system","uid":"fbae8778-c573-4d9b-a21e-e5fcb236586e","resourceVersion":"425","creationTimestamp":"2024-08-07T19:37:39Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.224.86:2379","kubernetes.io/config.hash":"7ac46b48ad876a3a598d6eacbc5ad1fe","kubernetes.io/config.mirror":"7ac46b48ad876a3a598d6eacbc5ad1fe","kubernetes.io/config.seen":"2024-08-07T19:37:39.552052160Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0807 19:38:16.005202     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:16.005202     956 round_trippers.go:469] Request Headers:
	I0807 19:38:16.005202     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:16.005202     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:16.008759     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:16.008759     956 round_trippers.go:577] Response Headers:
	I0807 19:38:16.008759     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:16.008759     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:16.008759     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:16 GMT
	I0807 19:38:16.008759     956 round_trippers.go:580]     Audit-Id: 3de8ccfc-84f1-43f7-8507-76cc809e19f6
	I0807 19:38:16.008759     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:16.008759     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:16.008759     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"448","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0807 19:38:16.008759     956 pod_ready.go:92] pod "etcd-multinode-116700" in "kube-system" namespace has status "Ready":"True"
	I0807 19:38:16.008759     956 pod_ready.go:81] duration metric: took 6.229ms for pod "etcd-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 19:38:16.008759     956 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 19:38:16.008759     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-116700
	I0807 19:38:16.008759     956 round_trippers.go:469] Request Headers:
	I0807 19:38:16.008759     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:16.008759     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:16.012114     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:16.012114     956 round_trippers.go:577] Response Headers:
	I0807 19:38:16.012114     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:16.012114     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:16.012114     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:16 GMT
	I0807 19:38:16.012114     956 round_trippers.go:580]     Audit-Id: 40cf2461-0ca7-487c-b045-20ed9b0cca93
	I0807 19:38:16.012114     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:16.012114     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:16.012934     956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-116700","namespace":"kube-system","uid":"6a7e36c1-9e53-4565-9998-c5bbbb1ea060","resourceVersion":"426","creationTimestamp":"2024-08-07T19:37:38Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.224.86:8443","kubernetes.io/config.hash":"f7d89d0655264a3dfa6358b49d3d5f42","kubernetes.io/config.mirror":"f7d89d0655264a3dfa6358b49d3d5f42","kubernetes.io/config.seen":"2024-08-07T19:37:31.050588290Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0807 19:38:16.013549     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:16.013549     956 round_trippers.go:469] Request Headers:
	I0807 19:38:16.013549     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:16.013549     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:16.015167     956 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0807 19:38:16.015167     956 round_trippers.go:577] Response Headers:
	I0807 19:38:16.015167     956 round_trippers.go:580]     Audit-Id: 1af18ad7-278e-4ff2-9b5d-47565e2120ff
	I0807 19:38:16.015167     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:16.015167     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:16.015167     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:16.015167     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:16.015167     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:16 GMT
	I0807 19:38:16.016174     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"448","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0807 19:38:16.016174     956 pod_ready.go:92] pod "kube-apiserver-multinode-116700" in "kube-system" namespace has status "Ready":"True"
	I0807 19:38:16.016174     956 pod_ready.go:81] duration metric: took 7.4157ms for pod "kube-apiserver-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 19:38:16.016174     956 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 19:38:16.016174     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-116700
	I0807 19:38:16.016174     956 round_trippers.go:469] Request Headers:
	I0807 19:38:16.016174     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:16.016174     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:16.019122     956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 19:38:16.019698     956 round_trippers.go:577] Response Headers:
	I0807 19:38:16.019698     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:16.019698     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:16.019698     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:16 GMT
	I0807 19:38:16.019698     956 round_trippers.go:580]     Audit-Id: b4ca1d03-e076-443a-a1fd-6da4b10bbb31
	I0807 19:38:16.019698     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:16.019698     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:16.020178     956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-116700","namespace":"kube-system","uid":"4d2e8250-9b12-4277-8834-515c1621fc78","resourceVersion":"423","creationTimestamp":"2024-08-07T19:37:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ef62d358a9b469de2443e4a4f620921d","kubernetes.io/config.mirror":"ef62d358a9b469de2443e4a4f620921d","kubernetes.io/config.seen":"2024-08-07T19:37:39.552053960Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0807 19:38:16.020695     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:16.020768     956 round_trippers.go:469] Request Headers:
	I0807 19:38:16.020768     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:16.020768     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:16.023128     956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 19:38:16.023627     956 round_trippers.go:577] Response Headers:
	I0807 19:38:16.023691     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:16.023691     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:16.023691     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:16 GMT
	I0807 19:38:16.023691     956 round_trippers.go:580]     Audit-Id: 3b10e3fd-49a9-4873-82d3-925c91fbedf7
	I0807 19:38:16.023691     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:16.023691     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:16.023798     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"448","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0807 19:38:16.024274     956 pod_ready.go:92] pod "kube-controller-manager-multinode-116700" in "kube-system" namespace has status "Ready":"True"
	I0807 19:38:16.024451     956 pod_ready.go:81] duration metric: took 8.2762ms for pod "kube-controller-manager-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 19:38:16.024451     956 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fmjt9" in "kube-system" namespace to be "Ready" ...
	I0807 19:38:16.024626     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fmjt9
	I0807 19:38:16.024675     956 round_trippers.go:469] Request Headers:
	I0807 19:38:16.024675     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:16.024706     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:16.026715     956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 19:38:16.026715     956 round_trippers.go:577] Response Headers:
	I0807 19:38:16.026715     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:16.026715     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:16.026715     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:16 GMT
	I0807 19:38:16.026715     956 round_trippers.go:580]     Audit-Id: 9338ffa0-af2c-4317-9603-8ef258cae2dc
	I0807 19:38:16.026715     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:16.026715     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:16.027707     956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fmjt9","generateName":"kube-proxy-","namespace":"kube-system","uid":"766df91e-8fd0-457b-8c11-8810059ca4d9","resourceVersion":"419","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d82856c0-3330-4ab9-b7bf-54ed48646bce","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d82856c0-3330-4ab9-b7bf-54ed48646bce\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0807 19:38:16.028724     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:16.028724     956 round_trippers.go:469] Request Headers:
	I0807 19:38:16.028724     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:16.028724     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:16.030533     956 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0807 19:38:16.030533     956 round_trippers.go:577] Response Headers:
	I0807 19:38:16.030533     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:16.030533     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:16.030533     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:16 GMT
	I0807 19:38:16.030533     956 round_trippers.go:580]     Audit-Id: cb509f92-4dee-495d-a0ae-f602f2581e65
	I0807 19:38:16.030533     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:16.030533     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:16.031633     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"448","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0807 19:38:16.032172     956 pod_ready.go:92] pod "kube-proxy-fmjt9" in "kube-system" namespace has status "Ready":"True"
	I0807 19:38:16.032172     956 pod_ready.go:81] duration metric: took 7.7212ms for pod "kube-proxy-fmjt9" in "kube-system" namespace to be "Ready" ...
	I0807 19:38:16.032172     956 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 19:38:16.192891     956 request.go:629] Waited for 160.4167ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-116700
	I0807 19:38:16.193016     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-116700
	I0807 19:38:16.193044     956 round_trippers.go:469] Request Headers:
	I0807 19:38:16.193044     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:16.193044     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:16.197228     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:38:16.197228     956 round_trippers.go:577] Response Headers:
	I0807 19:38:16.197228     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:16.197228     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:16.197228     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:16 GMT
	I0807 19:38:16.197366     956 round_trippers.go:580]     Audit-Id: 190153cf-7344-42a1-a58d-1f178a91b013
	I0807 19:38:16.197366     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:16.197366     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:16.197664     956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-116700","namespace":"kube-system","uid":"7b6df7b7-8c94-498a-bc4c-74d72efd572a","resourceVersion":"424","creationTimestamp":"2024-08-07T19:37:39Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fde91c95fce6faff219ccfa4b0b2484c","kubernetes.io/config.mirror":"fde91c95fce6faff219ccfa4b0b2484c","kubernetes.io/config.seen":"2024-08-07T19:37:39.552047359Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0807 19:38:16.394894     956 request.go:629] Waited for 196.4762ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:16.395153     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:38:16.395153     956 round_trippers.go:469] Request Headers:
	I0807 19:38:16.395153     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:16.395153     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:16.398342     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:16.398633     956 round_trippers.go:577] Response Headers:
	I0807 19:38:16.398633     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:16.398633     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:16.398633     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:16.398633     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:16 GMT
	I0807 19:38:16.398633     956 round_trippers.go:580]     Audit-Id: 1675b03d-8789-488b-baf8-8c9d1bae8094
	I0807 19:38:16.398633     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:16.398633     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"448","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0807 19:38:16.399611     956 pod_ready.go:92] pod "kube-scheduler-multinode-116700" in "kube-system" namespace has status "Ready":"True"
	I0807 19:38:16.399740     956 pod_ready.go:81] duration metric: took 367.5636ms for pod "kube-scheduler-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 19:38:16.399740     956 pod_ready.go:38] duration metric: took 2.9259956s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 19:38:16.399879     956 api_server.go:52] waiting for apiserver process to appear ...
	I0807 19:38:16.412185     956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 19:38:16.444573     956 command_runner.go:130] > 2151
	I0807 19:38:16.444573     956 api_server.go:72] duration metric: took 23.7752821s to wait for apiserver process to appear ...
	I0807 19:38:16.444573     956 api_server.go:88] waiting for apiserver healthz status ...
	I0807 19:38:16.444573     956 api_server.go:253] Checking apiserver healthz at https://172.28.224.86:8443/healthz ...
	I0807 19:38:16.452433     956 api_server.go:279] https://172.28.224.86:8443/healthz returned 200:
	ok
	I0807 19:38:16.453283     956 round_trippers.go:463] GET https://172.28.224.86:8443/version
	I0807 19:38:16.453283     956 round_trippers.go:469] Request Headers:
	I0807 19:38:16.453283     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:16.453283     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:16.454631     956 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0807 19:38:16.454631     956 round_trippers.go:577] Response Headers:
	I0807 19:38:16.454631     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:16 GMT
	I0807 19:38:16.454631     956 round_trippers.go:580]     Audit-Id: 897ea046-06f4-4533-82fb-289d6694c9fb
	I0807 19:38:16.454631     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:16.454631     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:16.454631     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:16.455291     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:16.455291     956 round_trippers.go:580]     Content-Length: 263
	I0807 19:38:16.455350     956 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0807 19:38:16.455393     956 api_server.go:141] control plane version: v1.30.3
	I0807 19:38:16.455393     956 api_server.go:131] duration metric: took 10.8198ms to wait for apiserver health ...
	I0807 19:38:16.455393     956 system_pods.go:43] waiting for kube-system pods to appear ...
	I0807 19:38:16.595916     956 request.go:629] Waited for 140.2374ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods
	I0807 19:38:16.596238     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods
	I0807 19:38:16.596315     956 round_trippers.go:469] Request Headers:
	I0807 19:38:16.596315     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:16.596315     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:16.600646     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:38:16.600646     956 round_trippers.go:577] Response Headers:
	I0807 19:38:16.600646     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:16.600646     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:16 GMT
	I0807 19:38:16.600646     956 round_trippers.go:580]     Audit-Id: 205eaff3-b013-4804-9dd8-167e6ac8cbbc
	I0807 19:38:16.600646     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:16.600646     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:16.601652     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:16.602725     956 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"471"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"467","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0807 19:38:16.605855     956 system_pods.go:59] 8 kube-system pods found
	I0807 19:38:16.605855     956 system_pods.go:61] "coredns-7db6d8ff4d-7l6v2" [7de73f9c-93d9-46c6-ae10-b253dd257a19] Running
	I0807 19:38:16.605855     956 system_pods.go:61] "etcd-multinode-116700" [fbae8778-c573-4d9b-a21e-e5fcb236586e] Running
	I0807 19:38:16.605855     956 system_pods.go:61] "kindnet-kltmx" [b2ddfdd4-b957-45e3-b967-cf2650e86069] Running
	I0807 19:38:16.605855     956 system_pods.go:61] "kube-apiserver-multinode-116700" [6a7e36c1-9e53-4565-9998-c5bbbb1ea060] Running
	I0807 19:38:16.605855     956 system_pods.go:61] "kube-controller-manager-multinode-116700" [4d2e8250-9b12-4277-8834-515c1621fc78] Running
	I0807 19:38:16.605855     956 system_pods.go:61] "kube-proxy-fmjt9" [766df91e-8fd0-457b-8c11-8810059ca4d9] Running
	I0807 19:38:16.605964     956 system_pods.go:61] "kube-scheduler-multinode-116700" [7b6df7b7-8c94-498a-bc4c-74d72efd572a] Running
	I0807 19:38:16.605964     956 system_pods.go:61] "storage-provisioner" [8a8036f6-f1a0-4fca-b8dd-ed99c3535b47] Running
	I0807 19:38:16.605964     956 system_pods.go:74] duration metric: took 150.57ms to wait for pod list to return data ...
	I0807 19:38:16.605964     956 default_sa.go:34] waiting for default service account to be created ...
	I0807 19:38:16.798709     956 request.go:629] Waited for 192.0085ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.86:8443/api/v1/namespaces/default/serviceaccounts
	I0807 19:38:16.798805     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/namespaces/default/serviceaccounts
	I0807 19:38:16.798805     956 round_trippers.go:469] Request Headers:
	I0807 19:38:16.798805     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:16.798805     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:16.805453     956 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 19:38:16.805527     956 round_trippers.go:577] Response Headers:
	I0807 19:38:16.805527     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:16.805527     956 round_trippers.go:580]     Content-Length: 261
	I0807 19:38:16.805527     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:16 GMT
	I0807 19:38:16.805527     956 round_trippers.go:580]     Audit-Id: 01be3dad-6af4-4c80-8acb-e9450677d3b4
	I0807 19:38:16.805586     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:16.805586     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:16.805586     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:16.805586     956 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"471"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"f9ade84e-dceb-49d5-8e06-66799b7c129c","resourceVersion":"345","creationTimestamp":"2024-08-07T19:37:52Z"}}]}
	I0807 19:38:16.805920     956 default_sa.go:45] found service account: "default"
	I0807 19:38:16.805920     956 default_sa.go:55] duration metric: took 199.9529ms for default service account to be created ...
	I0807 19:38:16.805920     956 system_pods.go:116] waiting for k8s-apps to be running ...
	I0807 19:38:17.001269     956 request.go:629] Waited for 195.3465ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods
	I0807 19:38:17.001269     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods
	I0807 19:38:17.001269     956 round_trippers.go:469] Request Headers:
	I0807 19:38:17.001269     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:17.001269     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:17.005162     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:17.005162     956 round_trippers.go:577] Response Headers:
	I0807 19:38:17.005162     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:17.005162     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:17 GMT
	I0807 19:38:17.005162     956 round_trippers.go:580]     Audit-Id: d8c6dd77-6003-49a6-9baa-bc88130b7cca
	I0807 19:38:17.005162     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:17.005162     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:17.005162     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:17.007107     956 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"472"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"467","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0807 19:38:17.009964     956 system_pods.go:86] 8 kube-system pods found
	I0807 19:38:17.009964     956 system_pods.go:89] "coredns-7db6d8ff4d-7l6v2" [7de73f9c-93d9-46c6-ae10-b253dd257a19] Running
	I0807 19:38:17.009964     956 system_pods.go:89] "etcd-multinode-116700" [fbae8778-c573-4d9b-a21e-e5fcb236586e] Running
	I0807 19:38:17.009964     956 system_pods.go:89] "kindnet-kltmx" [b2ddfdd4-b957-45e3-b967-cf2650e86069] Running
	I0807 19:38:17.009964     956 system_pods.go:89] "kube-apiserver-multinode-116700" [6a7e36c1-9e53-4565-9998-c5bbbb1ea060] Running
	I0807 19:38:17.009964     956 system_pods.go:89] "kube-controller-manager-multinode-116700" [4d2e8250-9b12-4277-8834-515c1621fc78] Running
	I0807 19:38:17.009964     956 system_pods.go:89] "kube-proxy-fmjt9" [766df91e-8fd0-457b-8c11-8810059ca4d9] Running
	I0807 19:38:17.009964     956 system_pods.go:89] "kube-scheduler-multinode-116700" [7b6df7b7-8c94-498a-bc4c-74d72efd572a] Running
	I0807 19:38:17.009964     956 system_pods.go:89] "storage-provisioner" [8a8036f6-f1a0-4fca-b8dd-ed99c3535b47] Running
	I0807 19:38:17.009964     956 system_pods.go:126] duration metric: took 204.041ms to wait for k8s-apps to be running ...
	I0807 19:38:17.009964     956 system_svc.go:44] waiting for kubelet service to be running ....
	I0807 19:38:17.022780     956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 19:38:17.050835     956 system_svc.go:56] duration metric: took 40.8707ms WaitForService to wait for kubelet
	I0807 19:38:17.051556     956 kubeadm.go:582] duration metric: took 24.3822574s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 19:38:17.051597     956 node_conditions.go:102] verifying NodePressure condition ...
	I0807 19:38:17.201242     956 request.go:629] Waited for 149.3067ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.86:8443/api/v1/nodes
	I0807 19:38:17.201242     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes
	I0807 19:38:17.201468     956 round_trippers.go:469] Request Headers:
	I0807 19:38:17.201468     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:38:17.201530     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:38:17.205135     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:38:17.205135     956 round_trippers.go:577] Response Headers:
	I0807 19:38:17.205785     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:38:17 GMT
	I0807 19:38:17.205785     956 round_trippers.go:580]     Audit-Id: e50ae438-4e6a-4e4f-8693-cca612fbfed2
	I0807 19:38:17.205785     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:38:17.205785     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:38:17.205785     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:38:17.205785     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:38:17.206043     956 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"473"},"items":[{"metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"448","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5012 chars]
	I0807 19:38:17.206595     956 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 19:38:17.206732     956 node_conditions.go:123] node cpu capacity is 2
	I0807 19:38:17.206732     956 node_conditions.go:105] duration metric: took 155.1335ms to run NodePressure ...
	I0807 19:38:17.206803     956 start.go:241] waiting for startup goroutines ...
	I0807 19:38:17.206803     956 start.go:246] waiting for cluster config update ...
	I0807 19:38:17.206803     956 start.go:255] writing updated cluster config ...
	I0807 19:38:17.214771     956 out.go:177] 
	I0807 19:38:17.217301     956 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 19:38:17.222953     956 config.go:182] Loaded profile config "multinode-116700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 19:38:17.223935     956 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\config.json ...
	I0807 19:38:17.229939     956 out.go:177] * Starting "multinode-116700-m02" worker node in "multinode-116700" cluster
	I0807 19:38:17.232673     956 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 19:38:17.232698     956 cache.go:56] Caching tarball of preloaded images
	I0807 19:38:17.232874     956 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0807 19:38:17.232874     956 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 19:38:17.232874     956 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\config.json ...
	I0807 19:38:17.237619     956 start.go:360] acquireMachinesLock for multinode-116700-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 19:38:17.237763     956 start.go:364] duration metric: took 144.9µs to acquireMachinesLock for "multinode-116700-m02"
	I0807 19:38:17.238080     956 start.go:93] Provisioning new machine with config: &{Name:multinode-116700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.3 ClusterName:multinode-116700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.224.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0807 19:38:17.238268     956 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0807 19:38:17.241985     956 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 19:38:17.242672     956 start.go:159] libmachine.API.Create for "multinode-116700" (driver="hyperv")
	I0807 19:38:17.242731     956 client.go:168] LocalClient.Create starting
	I0807 19:38:17.243230     956 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0807 19:38:17.243449     956 main.go:141] libmachine: Decoding PEM data...
	I0807 19:38:17.243449     956 main.go:141] libmachine: Parsing certificate...
	I0807 19:38:17.243449     956 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0807 19:38:17.243449     956 main.go:141] libmachine: Decoding PEM data...
	I0807 19:38:17.243449     956 main.go:141] libmachine: Parsing certificate...
	I0807 19:38:17.243449     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0807 19:38:19.185137     956 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0807 19:38:19.185137     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:38:19.185984     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0807 19:38:20.932959     956 main.go:141] libmachine: [stdout =====>] : False
	
	I0807 19:38:20.932959     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:38:20.933082     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0807 19:38:22.428330     956 main.go:141] libmachine: [stdout =====>] : True
	
	I0807 19:38:22.428330     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:38:22.428330     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0807 19:38:26.172887     956 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0807 19:38:26.172887     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:38:26.175496     956 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0807 19:38:26.665230     956 main.go:141] libmachine: Creating SSH key...
	I0807 19:38:27.066518     956 main.go:141] libmachine: Creating VM...
	I0807 19:38:27.066518     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0807 19:38:30.192484     956 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0807 19:38:30.192484     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:38:30.193093     956 main.go:141] libmachine: Using switch "Default Switch"
	I0807 19:38:30.193093     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0807 19:38:32.036345     956 main.go:141] libmachine: [stdout =====>] : True
	
	I0807 19:38:32.036875     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:38:32.036875     956 main.go:141] libmachine: Creating VHD
	I0807 19:38:32.036875     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0807 19:38:36.073163     956 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F5224F84-73D0-4EBF-ABB1-220316215FCD
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0807 19:38:36.073163     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:38:36.073163     956 main.go:141] libmachine: Writing magic tar header
	I0807 19:38:36.073163     956 main.go:141] libmachine: Writing SSH key tar header
	I0807 19:38:36.084538     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0807 19:38:39.401342     956 main.go:141] libmachine: [stdout =====>] : 
	I0807 19:38:39.401582     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:38:39.401692     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700-m02\disk.vhd' -SizeBytes 20000MB
	I0807 19:38:42.048910     956 main.go:141] libmachine: [stdout =====>] : 
	I0807 19:38:42.048910     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:38:42.049513     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-116700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0807 19:38:45.845368     956 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-116700-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0807 19:38:45.845368     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:38:45.846347     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-116700-m02 -DynamicMemoryEnabled $false
	I0807 19:38:48.248130     956 main.go:141] libmachine: [stdout =====>] : 
	I0807 19:38:48.250305     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:38:48.250305     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-116700-m02 -Count 2
	I0807 19:38:50.678269     956 main.go:141] libmachine: [stdout =====>] : 
	I0807 19:38:50.678269     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:38:50.679019     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-116700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700-m02\boot2docker.iso'
	I0807 19:38:53.477571     956 main.go:141] libmachine: [stdout =====>] : 
	I0807 19:38:53.477734     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:38:53.477734     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-116700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700-m02\disk.vhd'
	I0807 19:38:56.391327     956 main.go:141] libmachine: [stdout =====>] : 
	I0807 19:38:56.391327     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:38:56.391327     956 main.go:141] libmachine: Starting VM...
	I0807 19:38:56.391550     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-116700-m02
	I0807 19:38:59.687725     956 main.go:141] libmachine: [stdout =====>] : 
	I0807 19:38:59.687930     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:38:59.687930     956 main.go:141] libmachine: Waiting for host to start...
	I0807 19:38:59.687930     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:39:02.159987     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:39:02.159987     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:39:02.159987     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 19:39:04.893449     956 main.go:141] libmachine: [stdout =====>] : 
	I0807 19:39:04.893449     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:39:05.895426     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:39:08.289236     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:39:08.289236     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:39:08.289236     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 19:39:11.060273     956 main.go:141] libmachine: [stdout =====>] : 
	I0807 19:39:11.061234     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:39:12.066678     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:39:14.449671     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:39:14.450238     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:39:14.450238     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 19:39:17.202094     956 main.go:141] libmachine: [stdout =====>] : 
	I0807 19:39:17.203101     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:39:18.212100     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:39:20.673998     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:39:20.673998     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:39:20.673998     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 19:39:23.347433     956 main.go:141] libmachine: [stdout =====>] : 
	I0807 19:39:23.347961     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:39:24.359177     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:39:26.721217     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:39:26.721217     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:39:26.722090     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 19:39:29.435230     956 main.go:141] libmachine: [stdout =====>] : 172.28.226.55
	
	I0807 19:39:29.435923     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:39:29.435923     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:39:31.663534     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:39:31.663534     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:39:31.663534     956 machine.go:94] provisionDockerMachine start ...
	I0807 19:39:31.665691     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:39:34.013292     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:39:34.014140     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:39:34.014387     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 19:39:36.672715     956 main.go:141] libmachine: [stdout =====>] : 172.28.226.55
	
	I0807 19:39:36.673255     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:39:36.678625     956 main.go:141] libmachine: Using SSH client type: native
	I0807 19:39:36.691211     956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.55 22 <nil> <nil>}
	I0807 19:39:36.691211     956 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 19:39:36.819872     956 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0807 19:39:36.819872     956 buildroot.go:166] provisioning hostname "multinode-116700-m02"
	I0807 19:39:36.819971     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:39:39.058432     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:39:39.058432     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:39:39.059019     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 19:39:41.696594     956 main.go:141] libmachine: [stdout =====>] : 172.28.226.55
	
	I0807 19:39:41.697594     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:39:41.703428     956 main.go:141] libmachine: Using SSH client type: native
	I0807 19:39:41.704059     956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.55 22 <nil> <nil>}
	I0807 19:39:41.704059     956 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-116700-m02 && echo "multinode-116700-m02" | sudo tee /etc/hostname
	I0807 19:39:41.862936     956 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-116700-m02
	
	I0807 19:39:41.863074     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:39:44.128755     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:39:44.128755     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:39:44.129763     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 19:39:46.763169     956 main.go:141] libmachine: [stdout =====>] : 172.28.226.55
	
	I0807 19:39:46.763169     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:39:46.770024     956 main.go:141] libmachine: Using SSH client type: native
	I0807 19:39:46.770827     956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.55 22 <nil> <nil>}
	I0807 19:39:46.770827     956 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-116700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-116700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-116700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 19:39:46.913336     956 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 19:39:46.913336     956 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0807 19:39:46.913336     956 buildroot.go:174] setting up certificates
	I0807 19:39:46.913336     956 provision.go:84] configureAuth start
	I0807 19:39:46.913336     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:39:49.206567     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:39:49.206567     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:39:49.206709     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 19:39:51.840801     956 main.go:141] libmachine: [stdout =====>] : 172.28.226.55
	
	I0807 19:39:51.840801     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:39:51.841583     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:39:54.096709     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:39:54.097482     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:39:54.097482     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 19:39:56.688728     956 main.go:141] libmachine: [stdout =====>] : 172.28.226.55
	
	I0807 19:39:56.689336     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:39:56.689336     956 provision.go:143] copyHostCerts
	I0807 19:39:56.689573     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0807 19:39:56.689754     956 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0807 19:39:56.689754     956 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0807 19:39:56.690327     956 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0807 19:39:56.691504     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0807 19:39:56.691856     956 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0807 19:39:56.691892     956 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0807 19:39:56.692298     956 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0807 19:39:56.693265     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0807 19:39:56.693558     956 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0807 19:39:56.693558     956 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0807 19:39:56.694423     956 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0807 19:39:56.695168     956 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-116700-m02 san=[127.0.0.1 172.28.226.55 localhost minikube multinode-116700-m02]
	I0807 19:39:56.996418     956 provision.go:177] copyRemoteCerts
	I0807 19:39:57.010608     956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 19:39:57.010608     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:39:59.275777     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:39:59.275777     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:39:59.275933     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 19:40:01.953836     956 main.go:141] libmachine: [stdout =====>] : 172.28.226.55
	
	I0807 19:40:01.954892     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:40:01.954959     956 sshutil.go:53] new ssh client: &{IP:172.28.226.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700-m02\id_rsa Username:docker}
	I0807 19:40:02.068034     956 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0573611s)
	I0807 19:40:02.069094     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0807 19:40:02.069236     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 19:40:02.117162     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0807 19:40:02.117473     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0807 19:40:02.164607     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0807 19:40:02.165062     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0807 19:40:02.210430     956 provision.go:87] duration metric: took 15.2968989s to configureAuth
	I0807 19:40:02.210560     956 buildroot.go:189] setting minikube options for container-runtime
	I0807 19:40:02.211402     956 config.go:182] Loaded profile config "multinode-116700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 19:40:02.211479     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:40:04.537087     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:40:04.537087     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:40:04.537087     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 19:40:07.201451     956 main.go:141] libmachine: [stdout =====>] : 172.28.226.55
	
	I0807 19:40:07.201451     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:40:07.208187     956 main.go:141] libmachine: Using SSH client type: native
	I0807 19:40:07.208312     956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.55 22 <nil> <nil>}
	I0807 19:40:07.208312     956 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0807 19:40:07.347286     956 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0807 19:40:07.347286     956 buildroot.go:70] root file system type: tmpfs
	I0807 19:40:07.347895     956 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0807 19:40:07.347895     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:40:09.597411     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:40:09.597481     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:40:09.597481     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 19:40:12.296120     956 main.go:141] libmachine: [stdout =====>] : 172.28.226.55
	
	I0807 19:40:12.296948     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:40:12.302554     956 main.go:141] libmachine: Using SSH client type: native
	I0807 19:40:12.302554     956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.55 22 <nil> <nil>}
	I0807 19:40:12.302554     956 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.224.86"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0807 19:40:12.455499     956 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.224.86
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0807 19:40:12.455499     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:40:14.687193     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:40:14.687193     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:40:14.687193     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 19:40:17.322066     956 main.go:141] libmachine: [stdout =====>] : 172.28.226.55
	
	I0807 19:40:17.322298     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:40:17.327093     956 main.go:141] libmachine: Using SSH client type: native
	I0807 19:40:17.327803     956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.55 22 <nil> <nil>}
	I0807 19:40:17.327803     956 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0807 19:40:19.562755     956 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0807 19:40:19.562755     956 machine.go:97] duration metric: took 47.898608s to provisionDockerMachine
	I0807 19:40:19.562755     956 client.go:171] duration metric: took 2m2.3184601s to LocalClient.Create
	I0807 19:40:19.562755     956 start.go:167] duration metric: took 2m2.3185186s to libmachine.API.Create "multinode-116700"
	I0807 19:40:19.562755     956 start.go:293] postStartSetup for "multinode-116700-m02" (driver="hyperv")
	I0807 19:40:19.562755     956 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 19:40:19.575956     956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 19:40:19.575956     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:40:21.796235     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:40:21.796235     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:40:21.796393     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 19:40:24.461641     956 main.go:141] libmachine: [stdout =====>] : 172.28.226.55
	
	I0807 19:40:24.461641     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:40:24.461641     956 sshutil.go:53] new ssh client: &{IP:172.28.226.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700-m02\id_rsa Username:docker}
	I0807 19:40:24.564533     956 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9885137s)
	I0807 19:40:24.578758     956 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 19:40:24.585064     956 command_runner.go:130] > NAME=Buildroot
	I0807 19:40:24.585064     956 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0807 19:40:24.585064     956 command_runner.go:130] > ID=buildroot
	I0807 19:40:24.585064     956 command_runner.go:130] > VERSION_ID=2023.02.9
	I0807 19:40:24.585064     956 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0807 19:40:24.585064     956 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 19:40:24.585064     956 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0807 19:40:24.585064     956 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0807 19:40:24.586813     956 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> 96602.pem in /etc/ssl/certs
	I0807 19:40:24.586870     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> /etc/ssl/certs/96602.pem
	I0807 19:40:24.598464     956 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 19:40:24.616147     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /etc/ssl/certs/96602.pem (1708 bytes)
	I0807 19:40:24.669077     956 start.go:296] duration metric: took 5.1062562s for postStartSetup
	I0807 19:40:24.672141     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:40:26.900814     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:40:26.900814     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:40:26.901087     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 19:40:29.549278     956 main.go:141] libmachine: [stdout =====>] : 172.28.226.55
	
	I0807 19:40:29.549278     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:40:29.549566     956 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\config.json ...
	I0807 19:40:29.552290     956 start.go:128] duration metric: took 2m12.3122901s to createHost
	I0807 19:40:29.552541     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:40:31.784120     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:40:31.784120     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:40:31.784120     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 19:40:34.459698     956 main.go:141] libmachine: [stdout =====>] : 172.28.226.55
	
	I0807 19:40:34.459953     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:40:34.466625     956 main.go:141] libmachine: Using SSH client type: native
	I0807 19:40:34.466928     956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.55 22 <nil> <nil>}
	I0807 19:40:34.466928     956 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 19:40:34.608168     956 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723059634.627867026
	
	I0807 19:40:34.608168     956 fix.go:216] guest clock: 1723059634.627867026
	I0807 19:40:34.608168     956 fix.go:229] Guest: 2024-08-07 19:40:34.627867026 +0000 UTC Remote: 2024-08-07 19:40:29.5524403 +0000 UTC m=+360.824927501 (delta=5.075426726s)
	I0807 19:40:34.608248     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:40:36.837719     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:40:36.837719     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:40:36.838134     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 19:40:39.501549     956 main.go:141] libmachine: [stdout =====>] : 172.28.226.55
	
	I0807 19:40:39.501549     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:40:39.507323     956 main.go:141] libmachine: Using SSH client type: native
	I0807 19:40:39.507434     956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.55 22 <nil> <nil>}
	I0807 19:40:39.507974     956 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1723059634
	I0807 19:40:39.652748     956 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Aug  7 19:40:34 UTC 2024
	
	I0807 19:40:39.652748     956 fix.go:236] clock set: Wed Aug  7 19:40:34 UTC 2024
	 (err=<nil>)
	I0807 19:40:39.652748     956 start.go:83] releasing machines lock for "multinode-116700-m02", held for 2m22.4130793s
	I0807 19:40:39.653117     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:40:41.944575     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:40:41.944575     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:40:41.944575     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 19:40:44.653055     956 main.go:141] libmachine: [stdout =====>] : 172.28.226.55
	
	I0807 19:40:44.653276     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:40:44.656324     956 out.go:177] * Found network options:
	I0807 19:40:44.659138     956 out.go:177]   - NO_PROXY=172.28.224.86
	W0807 19:40:44.662237     956 proxy.go:119] fail to check proxy env: Error ip not in block
	I0807 19:40:44.664878     956 out.go:177]   - NO_PROXY=172.28.224.86
	W0807 19:40:44.667656     956 proxy.go:119] fail to check proxy env: Error ip not in block
	W0807 19:40:44.668731     956 proxy.go:119] fail to check proxy env: Error ip not in block
	I0807 19:40:44.671901     956 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0807 19:40:44.671901     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:40:44.683137     956 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0807 19:40:44.683137     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:40:47.013863     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:40:47.013863     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:40:47.013863     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 19:40:47.033704     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:40:47.033704     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:40:47.033704     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 19:40:49.849432     956 main.go:141] libmachine: [stdout =====>] : 172.28.226.55
	
	I0807 19:40:49.849432     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:40:49.850592     956 sshutil.go:53] new ssh client: &{IP:172.28.226.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700-m02\id_rsa Username:docker}
	I0807 19:40:49.874408     956 main.go:141] libmachine: [stdout =====>] : 172.28.226.55
	
	I0807 19:40:49.874614     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:40:49.874688     956 sshutil.go:53] new ssh client: &{IP:172.28.226.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700-m02\id_rsa Username:docker}
	I0807 19:40:49.947024     956 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0807 19:40:49.947823     956 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.2758541s)
	W0807 19:40:49.948038     956 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0807 19:40:49.966240     956 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0807 19:40:49.967110     956 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2837675s)
	W0807 19:40:49.967110     956 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 19:40:49.980010     956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 19:40:50.008053     956 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0807 19:40:50.008053     956 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0807 19:40:50.008053     956 start.go:495] detecting cgroup driver to use...
	I0807 19:40:50.008308     956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 19:40:50.048386     956 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0807 19:40:50.060767     956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	W0807 19:40:50.064837     956 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0807 19:40:50.064837     956 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0807 19:40:50.094188     956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0807 19:40:50.114145     956 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0807 19:40:50.125498     956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0807 19:40:50.156408     956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 19:40:50.186659     956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0807 19:40:50.218772     956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 19:40:50.250953     956 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 19:40:50.282088     956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0807 19:40:50.313241     956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0807 19:40:50.345317     956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0807 19:40:50.377415     956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 19:40:50.396471     956 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0807 19:40:50.409282     956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 19:40:50.440233     956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:40:50.637766     956 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0807 19:40:50.670762     956 start.go:495] detecting cgroup driver to use...
	I0807 19:40:50.682284     956 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0807 19:40:50.705791     956 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0807 19:40:50.705791     956 command_runner.go:130] > [Unit]
	I0807 19:40:50.706419     956 command_runner.go:130] > Description=Docker Application Container Engine
	I0807 19:40:50.706419     956 command_runner.go:130] > Documentation=https://docs.docker.com
	I0807 19:40:50.706454     956 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0807 19:40:50.706454     956 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0807 19:40:50.706480     956 command_runner.go:130] > StartLimitBurst=3
	I0807 19:40:50.706480     956 command_runner.go:130] > StartLimitIntervalSec=60
	I0807 19:40:50.706480     956 command_runner.go:130] > [Service]
	I0807 19:40:50.706480     956 command_runner.go:130] > Type=notify
	I0807 19:40:50.706480     956 command_runner.go:130] > Restart=on-failure
	I0807 19:40:50.706480     956 command_runner.go:130] > Environment=NO_PROXY=172.28.224.86
	I0807 19:40:50.706480     956 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0807 19:40:50.706480     956 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0807 19:40:50.706480     956 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0807 19:40:50.706480     956 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0807 19:40:50.706480     956 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0807 19:40:50.706480     956 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0807 19:40:50.706480     956 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0807 19:40:50.706480     956 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0807 19:40:50.706480     956 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0807 19:40:50.706480     956 command_runner.go:130] > ExecStart=
	I0807 19:40:50.706480     956 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0807 19:40:50.706480     956 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0807 19:40:50.706480     956 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0807 19:40:50.706480     956 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0807 19:40:50.706480     956 command_runner.go:130] > LimitNOFILE=infinity
	I0807 19:40:50.706480     956 command_runner.go:130] > LimitNPROC=infinity
	I0807 19:40:50.706480     956 command_runner.go:130] > LimitCORE=infinity
	I0807 19:40:50.706480     956 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0807 19:40:50.706480     956 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0807 19:40:50.706480     956 command_runner.go:130] > TasksMax=infinity
	I0807 19:40:50.706480     956 command_runner.go:130] > TimeoutStartSec=0
	I0807 19:40:50.706480     956 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0807 19:40:50.706480     956 command_runner.go:130] > Delegate=yes
	I0807 19:40:50.706480     956 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0807 19:40:50.706480     956 command_runner.go:130] > KillMode=process
	I0807 19:40:50.706480     956 command_runner.go:130] > [Install]
	I0807 19:40:50.706480     956 command_runner.go:130] > WantedBy=multi-user.target
	I0807 19:40:50.719339     956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 19:40:50.753825     956 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 19:40:50.795933     956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 19:40:50.831402     956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 19:40:50.866518     956 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0807 19:40:50.935703     956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 19:40:50.958502     956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 19:40:50.994657     956 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0807 19:40:51.005895     956 ssh_runner.go:195] Run: which cri-dockerd
	I0807 19:40:51.013220     956 command_runner.go:130] > /usr/bin/cri-dockerd
	I0807 19:40:51.026165     956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0807 19:40:51.045611     956 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0807 19:40:51.094315     956 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0807 19:40:51.293165     956 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0807 19:40:51.477147     956 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0807 19:40:51.477147     956 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0807 19:40:51.528629     956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:40:51.732622     956 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 19:40:54.320906     956 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5882514s)
	I0807 19:40:54.333735     956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0807 19:40:54.371285     956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 19:40:54.407463     956 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0807 19:40:54.626057     956 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0807 19:40:54.820918     956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:40:55.027639     956 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0807 19:40:55.074700     956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 19:40:55.111136     956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:40:55.302690     956 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0807 19:40:55.404026     956 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0807 19:40:55.416016     956 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0807 19:40:55.424023     956 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0807 19:40:55.424023     956 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0807 19:40:55.424023     956 command_runner.go:130] > Device: 0,22	Inode: 873         Links: 1
	I0807 19:40:55.424023     956 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0807 19:40:55.424023     956 command_runner.go:130] > Access: 2024-08-07 19:40:55.346539497 +0000
	I0807 19:40:55.424023     956 command_runner.go:130] > Modify: 2024-08-07 19:40:55.346539497 +0000
	I0807 19:40:55.424023     956 command_runner.go:130] > Change: 2024-08-07 19:40:55.349539505 +0000
	I0807 19:40:55.424023     956 command_runner.go:130] >  Birth: -
	I0807 19:40:55.424023     956 start.go:563] Will wait 60s for crictl version
	I0807 19:40:55.435843     956 ssh_runner.go:195] Run: which crictl
	I0807 19:40:55.441644     956 command_runner.go:130] > /usr/bin/crictl
	I0807 19:40:55.454215     956 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 19:40:55.507054     956 command_runner.go:130] > Version:  0.1.0
	I0807 19:40:55.507054     956 command_runner.go:130] > RuntimeName:  docker
	I0807 19:40:55.507054     956 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0807 19:40:55.507054     956 command_runner.go:130] > RuntimeApiVersion:  v1
	I0807 19:40:55.507054     956 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0807 19:40:55.517251     956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 19:40:55.554428     956 command_runner.go:130] > 27.1.1
	I0807 19:40:55.564318     956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 19:40:55.594632     956 command_runner.go:130] > 27.1.1
	I0807 19:40:55.601288     956 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0807 19:40:55.604052     956 out.go:177]   - env NO_PROXY=172.28.224.86
	I0807 19:40:55.607382     956 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0807 19:40:55.611417     956 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0807 19:40:55.611417     956 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0807 19:40:55.611417     956 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0807 19:40:55.611417     956 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:f6:3a:6a Flags:up|broadcast|multicast|running}
	I0807 19:40:55.614627     956 ip.go:210] interface addr: fe80::e7eb:b592:d388:ff99/64
	I0807 19:40:55.614627     956 ip.go:210] interface addr: 172.28.224.1/20
	I0807 19:40:55.629231     956 ssh_runner.go:195] Run: grep 172.28.224.1	host.minikube.internal$ /etc/hosts
	I0807 19:40:55.637250     956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 19:40:55.662875     956 mustload.go:65] Loading cluster: multinode-116700
	I0807 19:40:55.663553     956 config.go:182] Loaded profile config "multinode-116700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 19:40:55.663622     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:40:57.942532     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:40:57.942532     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:40:57.943042     956 host.go:66] Checking if "multinode-116700" exists ...
	I0807 19:40:57.943694     956 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700 for IP: 172.28.226.55
	I0807 19:40:57.943694     956 certs.go:194] generating shared ca certs ...
	I0807 19:40:57.943694     956 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:40:57.944518     956 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0807 19:40:57.945084     956 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0807 19:40:57.945162     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0807 19:40:57.945563     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0807 19:40:57.945665     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0807 19:40:57.945965     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0807 19:40:57.946723     956 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem (1338 bytes)
	W0807 19:40:57.946723     956 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660_empty.pem, impossibly tiny 0 bytes
	I0807 19:40:57.946723     956 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0807 19:40:57.947349     956 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0807 19:40:57.947803     956 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0807 19:40:57.948109     956 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0807 19:40:57.948109     956 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem (1708 bytes)
	I0807 19:40:57.948854     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> /usr/share/ca-certificates/96602.pem
	I0807 19:40:57.949060     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:40:57.949252     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem -> /usr/share/ca-certificates/9660.pem
	I0807 19:40:57.949590     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 19:40:58.000441     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 19:40:58.047641     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 19:40:58.095176     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0807 19:40:58.142220     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /usr/share/ca-certificates/96602.pem (1708 bytes)
	I0807 19:40:58.187723     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 19:40:58.235165     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem --> /usr/share/ca-certificates/9660.pem (1338 bytes)
	I0807 19:40:58.296701     956 ssh_runner.go:195] Run: openssl version
	I0807 19:40:58.305626     956 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0807 19:40:58.318876     956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9660.pem && ln -fs /usr/share/ca-certificates/9660.pem /etc/ssl/certs/9660.pem"
	I0807 19:40:58.352115     956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9660.pem
	I0807 19:40:58.359545     956 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  7 17:50 /usr/share/ca-certificates/9660.pem
	I0807 19:40:58.359734     956 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 17:50 /usr/share/ca-certificates/9660.pem
	I0807 19:40:58.367062     956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9660.pem
	I0807 19:40:58.390378     956 command_runner.go:130] > 51391683
	I0807 19:40:58.402782     956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9660.pem /etc/ssl/certs/51391683.0"
	I0807 19:40:58.435195     956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96602.pem && ln -fs /usr/share/ca-certificates/96602.pem /etc/ssl/certs/96602.pem"
	I0807 19:40:58.465976     956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96602.pem
	I0807 19:40:58.473876     956 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  7 17:50 /usr/share/ca-certificates/96602.pem
	I0807 19:40:58.473876     956 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 17:50 /usr/share/ca-certificates/96602.pem
	I0807 19:40:58.485831     956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96602.pem
	I0807 19:40:58.494399     956 command_runner.go:130] > 3ec20f2e
	I0807 19:40:58.508788     956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96602.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 19:40:58.541882     956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 19:40:58.572857     956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:40:58.582876     956 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  7 17:33 /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:40:58.582876     956 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:33 /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:40:58.594853     956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:40:58.604502     956 command_runner.go:130] > b5213941
	I0807 19:40:58.615587     956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 19:40:58.646410     956 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 19:40:58.652401     956 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0807 19:40:58.652801     956 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0807 19:40:58.653376     956 kubeadm.go:934] updating node {m02 172.28.226.55 8443 v1.30.3 docker false true} ...
	I0807 19:40:58.653692     956 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-116700-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.226.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-116700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 19:40:58.667521     956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 19:40:58.686561     956 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	I0807 19:40:58.686561     956 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0807 19:40:58.698603     956 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0807 19:40:58.718382     956 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0807 19:40:58.718496     956 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0807 19:40:58.718614     956 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0807 19:40:58.718690     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0807 19:40:58.718690     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0807 19:40:58.738574     956 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0807 19:40:58.738574     956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 19:40:58.738574     956 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0807 19:40:58.750600     956 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0807 19:40:58.750688     956 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0807 19:40:58.750844     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0807 19:40:58.780982     956 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0807 19:40:58.780982     956 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0807 19:40:58.781245     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0807 19:40:58.781321     956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0807 19:40:58.794040     956 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0807 19:40:58.841656     956 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0807 19:40:58.848917     956 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0807 19:40:58.848917     956 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0807 19:41:00.056221     956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0807 19:41:00.078412     956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0807 19:41:00.112810     956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 19:41:00.164536     956 ssh_runner.go:195] Run: grep 172.28.224.86	control-plane.minikube.internal$ /etc/hosts
	I0807 19:41:00.171245     956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.224.86	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 19:41:00.213803     956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:41:00.417887     956 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 19:41:00.447190     956 host.go:66] Checking if "multinode-116700" exists ...
	I0807 19:41:00.448179     956 start.go:317] joinCluster: &{Name:multinode-116700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-116700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.224.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.226.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:41:00.448308     956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0807 19:41:00.448368     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:41:02.707089     956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:41:02.707089     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:41:02.707233     956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:41:05.400523     956 main.go:141] libmachine: [stdout =====>] : 172.28.224.86
	
	I0807 19:41:05.400690     956 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:41:05.401309     956 sshutil.go:53] new ssh client: &{IP:172.28.224.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700\id_rsa Username:docker}
	I0807 19:41:05.598286     956 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token sqjauf.5bf0ymuug4y53891 --discovery-token-ca-cert-hash sha256:54111b4769238fc338d0511ba57c177f732a6d68c0b9cd0aa8cacbbf3c79643b 
	I0807 19:41:05.601207     956 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1526204s)
	I0807 19:41:05.601340     956 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.28.226.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0807 19:41:05.601340     956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token sqjauf.5bf0ymuug4y53891 --discovery-token-ca-cert-hash sha256:54111b4769238fc338d0511ba57c177f732a6d68c0b9cd0aa8cacbbf3c79643b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-116700-m02"
	I0807 19:41:05.825390     956 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0807 19:41:07.640775     956 command_runner.go:130] > [preflight] Running pre-flight checks
	I0807 19:41:07.640877     956 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0807 19:41:07.640877     956 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0807 19:41:07.640877     956 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0807 19:41:07.640877     956 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0807 19:41:07.640877     956 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0807 19:41:07.640877     956 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0807 19:41:07.640877     956 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001775333s
	I0807 19:41:07.640877     956 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0807 19:41:07.641043     956 command_runner.go:130] > This node has joined the cluster:
	I0807 19:41:07.641063     956 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0807 19:41:07.641063     956 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0807 19:41:07.641091     956 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0807 19:41:07.641125     956 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token sqjauf.5bf0ymuug4y53891 --discovery-token-ca-cert-hash sha256:54111b4769238fc338d0511ba57c177f732a6d68c0b9cd0aa8cacbbf3c79643b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-116700-m02": (2.0397587s)
	I0807 19:41:07.641125     956 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0807 19:41:07.863457     956 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0807 19:41:08.079384     956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-116700-m02 minikube.k8s.io/updated_at=2024_08_07T19_41_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e minikube.k8s.io/name=multinode-116700 minikube.k8s.io/primary=false
	I0807 19:41:08.208611     956 command_runner.go:130] > node/multinode-116700-m02 labeled
	I0807 19:41:08.208757     956 start.go:319] duration metric: took 7.7604786s to joinCluster
	I0807 19:41:08.208939     956 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.28.226.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0807 19:41:08.209644     956 config.go:182] Loaded profile config "multinode-116700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 19:41:08.212797     956 out.go:177] * Verifying Kubernetes components...
	I0807 19:41:08.226384     956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:41:08.439790     956 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 19:41:08.470677     956 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 19:41:08.471559     956 kapi.go:59] client config for multinode-116700: &rest.Config{Host:"https://172.28.224.86:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-116700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-116700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1da64c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0807 19:41:08.472766     956 node_ready.go:35] waiting up to 6m0s for node "multinode-116700-m02" to be "Ready" ...
	I0807 19:41:08.472962     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:08.473023     956 round_trippers.go:469] Request Headers:
	I0807 19:41:08.473023     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:08.473023     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:08.490561     956 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0807 19:41:08.490561     956 round_trippers.go:577] Response Headers:
	I0807 19:41:08.490561     956 round_trippers.go:580]     Audit-Id: 6a4842fd-a1cb-411d-889d-75bc44be8350
	I0807 19:41:08.490561     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:08.490779     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:08.490779     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:08.490779     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:08.490779     956 round_trippers.go:580]     Content-Length: 4029
	I0807 19:41:08.490819     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:08 GMT
	I0807 19:41:08.490869     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"640","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0807 19:41:08.984423     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:08.984423     956 round_trippers.go:469] Request Headers:
	I0807 19:41:08.984423     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:08.984423     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:08.988765     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:08.988765     956 round_trippers.go:577] Response Headers:
	I0807 19:41:08.988765     956 round_trippers.go:580]     Audit-Id: b6c14618-6291-4d3f-ba8a-f82e89ccbba8
	I0807 19:41:08.988765     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:08.988827     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:08.988827     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:08.988857     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:08.988857     956 round_trippers.go:580]     Content-Length: 4029
	I0807 19:41:08.988857     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:09 GMT
	I0807 19:41:08.989020     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"640","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0807 19:41:09.481359     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:09.481359     956 round_trippers.go:469] Request Headers:
	I0807 19:41:09.481359     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:09.481359     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:09.485746     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:41:09.485746     956 round_trippers.go:577] Response Headers:
	I0807 19:41:09.485746     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:09.485746     956 round_trippers.go:580]     Content-Length: 4029
	I0807 19:41:09.485897     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:09 GMT
	I0807 19:41:09.485897     956 round_trippers.go:580]     Audit-Id: 79e88478-0fa2-4e9a-b271-5a72f7637014
	I0807 19:41:09.485897     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:09.485897     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:09.485897     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:09.486097     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"640","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0807 19:41:09.980247     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:09.980384     956 round_trippers.go:469] Request Headers:
	I0807 19:41:09.980384     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:09.980384     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:09.983796     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:09.984405     956 round_trippers.go:577] Response Headers:
	I0807 19:41:09.984405     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:09.984405     956 round_trippers.go:580]     Content-Length: 4029
	I0807 19:41:09.984405     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:10 GMT
	I0807 19:41:09.984405     956 round_trippers.go:580]     Audit-Id: 9eaa9e7c-207b-4f61-b4e9-ab0492130032
	I0807 19:41:09.984405     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:09.984405     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:09.984405     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:09.984405     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"640","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0807 19:41:10.479710     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:10.479710     956 round_trippers.go:469] Request Headers:
	I0807 19:41:10.479813     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:10.479813     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:10.483891     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:41:10.483984     956 round_trippers.go:577] Response Headers:
	I0807 19:41:10.483984     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:10.483984     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:10.483984     956 round_trippers.go:580]     Content-Length: 4029
	I0807 19:41:10.483984     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:10 GMT
	I0807 19:41:10.483984     956 round_trippers.go:580]     Audit-Id: 4e513e72-16c6-421a-98f3-de926e7f16a2
	I0807 19:41:10.483984     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:10.483984     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:10.484244     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"640","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0807 19:41:10.484733     956 node_ready.go:53] node "multinode-116700-m02" has status "Ready":"False"
	I0807 19:41:10.977589     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:10.977701     956 round_trippers.go:469] Request Headers:
	I0807 19:41:10.977701     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:10.977701     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:10.981043     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:10.981492     956 round_trippers.go:577] Response Headers:
	I0807 19:41:10.981492     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:10.981492     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:10.981492     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:10.981492     956 round_trippers.go:580]     Content-Length: 4029
	I0807 19:41:10.981492     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:11 GMT
	I0807 19:41:10.981492     956 round_trippers.go:580]     Audit-Id: 7c5566d5-21c2-4751-8c66-4a7ed84583e2
	I0807 19:41:10.981492     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:10.981777     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"640","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0807 19:41:11.475581     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:11.475813     956 round_trippers.go:469] Request Headers:
	I0807 19:41:11.475813     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:11.475813     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:11.479522     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:11.479522     956 round_trippers.go:577] Response Headers:
	I0807 19:41:11.479522     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:11.479522     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:11.479522     956 round_trippers.go:580]     Content-Length: 4029
	I0807 19:41:11.479522     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:11 GMT
	I0807 19:41:11.479522     956 round_trippers.go:580]     Audit-Id: cb993949-6c4f-4930-ba9a-6ebfdaf571a4
	I0807 19:41:11.479522     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:11.479522     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:11.480740     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"640","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0807 19:41:11.976445     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:11.976445     956 round_trippers.go:469] Request Headers:
	I0807 19:41:11.976524     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:11.976524     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:11.980638     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:41:11.980638     956 round_trippers.go:577] Response Headers:
	I0807 19:41:11.980638     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:12 GMT
	I0807 19:41:11.980638     956 round_trippers.go:580]     Audit-Id: 22d06962-b5ab-492d-b7c7-7b282dd79ee5
	I0807 19:41:11.980638     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:11.981023     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:11.981023     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:11.981023     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:11.981023     956 round_trippers.go:580]     Content-Length: 4029
	I0807 19:41:11.981172     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"640","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0807 19:41:12.486439     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:12.486439     956 round_trippers.go:469] Request Headers:
	I0807 19:41:12.486556     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:12.486556     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:12.490256     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:12.491052     956 round_trippers.go:577] Response Headers:
	I0807 19:41:12.491052     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:12.491052     956 round_trippers.go:580]     Content-Length: 4029
	I0807 19:41:12.491052     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:12 GMT
	I0807 19:41:12.491052     956 round_trippers.go:580]     Audit-Id: 63a3dff8-73c3-4442-9528-3122f3b9471c
	I0807 19:41:12.491052     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:12.491052     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:12.491052     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:12.491286     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"640","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0807 19:41:12.491412     956 node_ready.go:53] node "multinode-116700-m02" has status "Ready":"False"
	I0807 19:41:12.984264     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:12.984264     956 round_trippers.go:469] Request Headers:
	I0807 19:41:12.984264     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:12.984264     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:12.987945     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:12.987945     956 round_trippers.go:577] Response Headers:
	I0807 19:41:12.987945     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:13 GMT
	I0807 19:41:12.987945     956 round_trippers.go:580]     Audit-Id: 6e711467-13b1-4c1a-a022-2af67ec35130
	I0807 19:41:12.987945     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:12.987945     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:12.987945     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:12.987945     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:12.987945     956 round_trippers.go:580]     Content-Length: 4029
	I0807 19:41:12.989297     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"640","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0807 19:41:13.483947     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:13.483947     956 round_trippers.go:469] Request Headers:
	I0807 19:41:13.483947     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:13.483947     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:13.488541     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:41:13.489283     956 round_trippers.go:577] Response Headers:
	I0807 19:41:13.489283     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:13.489372     956 round_trippers.go:580]     Content-Length: 4029
	I0807 19:41:13.489372     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:13 GMT
	I0807 19:41:13.489372     956 round_trippers.go:580]     Audit-Id: b05186f2-3e15-4cc0-9db7-2f1a62f7d0f1
	I0807 19:41:13.489372     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:13.489372     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:13.489372     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:13.489372     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"640","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0807 19:41:13.985135     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:13.985351     956 round_trippers.go:469] Request Headers:
	I0807 19:41:13.985351     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:13.985351     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:13.990291     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:41:13.990489     956 round_trippers.go:577] Response Headers:
	I0807 19:41:13.990489     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:14 GMT
	I0807 19:41:13.990554     956 round_trippers.go:580]     Audit-Id: c6900525-a729-4a23-825c-71aaa9aad721
	I0807 19:41:13.990554     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:13.990554     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:13.990554     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:13.990590     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:13.990590     956 round_trippers.go:580]     Content-Length: 4029
	I0807 19:41:13.990920     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"640","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0807 19:41:14.488228     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:14.488329     956 round_trippers.go:469] Request Headers:
	I0807 19:41:14.488329     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:14.488329     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:14.491078     956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 19:41:14.491078     956 round_trippers.go:577] Response Headers:
	I0807 19:41:14.491078     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:14 GMT
	I0807 19:41:14.491805     956 round_trippers.go:580]     Audit-Id: 7e239415-1488-413f-8e72-f471f76b3dd2
	I0807 19:41:14.491805     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:14.491805     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:14.491805     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:14.491805     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:14.491805     956 round_trippers.go:580]     Content-Length: 4029
	I0807 19:41:14.492044     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"640","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0807 19:41:14.492288     956 node_ready.go:53] node "multinode-116700-m02" has status "Ready":"False"
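The repeated `GET /api/v1/nodes/multinode-116700-m02` requests above are minikube's node-readiness poll: it re-fetches the Node object on a roughly 500 ms cadence (visible in the log timestamps) and logs `has status "Ready":"False"` until the kubelet posts a Ready condition. A minimal, self-contained sketch of that wait pattern — hypothetical helper name and a counter standing in for the API server, not minikube's actual code:

```go
package main

import (
	"fmt"
	"time"
)

// waitNodeReady polls check() up to attempts times, sleeping interval
// between tries, and reports whether the node became Ready in time.
// In minikube the check would parse status.conditions from the Node
// JSON returned by the apiserver; here a closure simulates it.
func waitNodeReady(check func() bool, interval time.Duration, attempts int) bool {
	for i := 0; i < attempts; i++ {
		if check() {
			return true
		}
		// Matches the ~500 ms cadence between the GETs in the log.
		time.Sleep(interval)
	}
	return false
}

func main() {
	polls := 0
	ready := waitNodeReady(func() bool {
		polls++
		return polls >= 3 // simulated node turns Ready on the third poll
	}, time.Millisecond, 10)
	fmt.Println(ready, polls)
}
```

In the real client this loop runs against the apiserver with a deadline, which is why a node that never reports Ready eventually surfaces as a test timeout rather than an error on any single request.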
	I0807 19:41:14.976918     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:14.977073     956 round_trippers.go:469] Request Headers:
	I0807 19:41:14.977126     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:14.977126     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:14.980710     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:14.980710     956 round_trippers.go:577] Response Headers:
	I0807 19:41:14.980936     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:15 GMT
	I0807 19:41:14.980936     956 round_trippers.go:580]     Audit-Id: f2a31e0e-8274-410b-a4a6-d970b2599e56
	I0807 19:41:14.980936     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:14.980936     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:14.980936     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:14.980936     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:14.980936     956 round_trippers.go:580]     Content-Length: 4029
	I0807 19:41:14.981015     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"640","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0807 19:41:15.481461     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:15.481461     956 round_trippers.go:469] Request Headers:
	I0807 19:41:15.481461     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:15.481461     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:15.485091     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:15.485091     956 round_trippers.go:577] Response Headers:
	I0807 19:41:15.485091     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:15.485091     956 round_trippers.go:580]     Content-Length: 4029
	I0807 19:41:15.485091     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:15 GMT
	I0807 19:41:15.485091     956 round_trippers.go:580]     Audit-Id: ac11f3f8-0d77-4d49-ae10-e8ff5b920906
	I0807 19:41:15.485091     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:15.485091     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:15.485091     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:15.485091     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"640","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0807 19:41:15.987671     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:15.987889     956 round_trippers.go:469] Request Headers:
	I0807 19:41:15.987889     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:15.987889     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:15.992323     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:41:15.992471     956 round_trippers.go:577] Response Headers:
	I0807 19:41:15.992471     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:15.992471     956 round_trippers.go:580]     Content-Length: 4029
	I0807 19:41:15.992471     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:16 GMT
	I0807 19:41:15.992471     956 round_trippers.go:580]     Audit-Id: 17707756-3664-456c-b0ae-fc3edb47dc30
	I0807 19:41:15.992471     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:15.992471     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:15.992471     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:15.992664     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"640","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0807 19:41:16.476675     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:16.476899     956 round_trippers.go:469] Request Headers:
	I0807 19:41:16.476899     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:16.476899     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:16.481038     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:41:16.481141     956 round_trippers.go:577] Response Headers:
	I0807 19:41:16.481141     956 round_trippers.go:580]     Content-Length: 4029
	I0807 19:41:16.481141     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:16 GMT
	I0807 19:41:16.481141     956 round_trippers.go:580]     Audit-Id: 46d319ba-e674-4dca-b641-f2cb5bc55287
	I0807 19:41:16.481141     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:16.481141     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:16.481141     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:16.481141     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:16.481404     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"640","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0807 19:41:16.981626     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:16.981914     956 round_trippers.go:469] Request Headers:
	I0807 19:41:16.981914     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:16.981914     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:16.985316     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:16.985316     956 round_trippers.go:577] Response Headers:
	I0807 19:41:16.986239     956 round_trippers.go:580]     Content-Length: 4029
	I0807 19:41:16.986239     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:17 GMT
	I0807 19:41:16.986239     956 round_trippers.go:580]     Audit-Id: feb2902d-937f-40e8-b06d-14164ceb7623
	I0807 19:41:16.986239     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:16.986239     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:16.986239     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:16.986239     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:16.986421     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"640","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0807 19:41:16.986874     956 node_ready.go:53] node "multinode-116700-m02" has status "Ready":"False"
	I0807 19:41:17.474406     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:17.474459     956 round_trippers.go:469] Request Headers:
	I0807 19:41:17.474459     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:17.474459     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:17.479064     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:41:17.479064     956 round_trippers.go:577] Response Headers:
	I0807 19:41:17.479064     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:17.479064     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:17.479064     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:17.479064     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:17.479064     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:17 GMT
	I0807 19:41:17.479369     956 round_trippers.go:580]     Audit-Id: bca3f620-a91a-4ff9-8e28-8078d58473ab
	I0807 19:41:17.479733     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:17.980057     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:17.980057     956 round_trippers.go:469] Request Headers:
	I0807 19:41:17.980057     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:17.980057     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:17.983905     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:17.983905     956 round_trippers.go:577] Response Headers:
	I0807 19:41:17.983905     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:17.983905     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:17.983905     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:17.983905     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:17.983905     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:18 GMT
	I0807 19:41:17.983905     956 round_trippers.go:580]     Audit-Id: b36c8968-a95e-4750-a862-7c6bd51a6852
	I0807 19:41:17.985292     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:18.481175     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:18.481175     956 round_trippers.go:469] Request Headers:
	I0807 19:41:18.481175     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:18.481175     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:18.771469     956 round_trippers.go:574] Response Status: 200 OK in 289 milliseconds
	I0807 19:41:18.771564     956 round_trippers.go:577] Response Headers:
	I0807 19:41:18.771641     956 round_trippers.go:580]     Audit-Id: a461e9fe-1990-4b2d-880f-3118b180108c
	I0807 19:41:18.771641     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:18.771641     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:18.771641     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:18.771641     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:18.771641     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:18 GMT
	I0807 19:41:18.771839     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:18.978631     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:18.978849     956 round_trippers.go:469] Request Headers:
	I0807 19:41:18.978849     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:18.978849     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:18.981256     956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 19:41:18.981256     956 round_trippers.go:577] Response Headers:
	I0807 19:41:18.981594     956 round_trippers.go:580]     Audit-Id: bf244b31-3cdd-4703-be0f-cf1fc833c8fd
	I0807 19:41:18.981594     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:18.981594     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:18.981594     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:18.981594     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:18.981650     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:19 GMT
	I0807 19:41:18.985374     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:19.487277     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:19.487339     956 round_trippers.go:469] Request Headers:
	I0807 19:41:19.487399     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:19.487399     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:19.489682     956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 19:41:19.489682     956 round_trippers.go:577] Response Headers:
	I0807 19:41:19.489682     956 round_trippers.go:580]     Audit-Id: 41fb0e9f-ea67-450a-9493-d22b8878392e
	I0807 19:41:19.489682     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:19.489682     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:19.489682     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:19.489682     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:19.489682     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:19 GMT
	I0807 19:41:19.489682     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:19.489682     956 node_ready.go:53] node "multinode-116700-m02" has status "Ready":"False"
	I0807 19:41:19.977260     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:19.977260     956 round_trippers.go:469] Request Headers:
	I0807 19:41:19.977260     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:19.977260     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:20.114838     956 round_trippers.go:574] Response Status: 200 OK in 137 milliseconds
	I0807 19:41:20.114838     956 round_trippers.go:577] Response Headers:
	I0807 19:41:20.115157     956 round_trippers.go:580]     Audit-Id: 68852926-4b5a-4231-a3e8-930cd2256974
	I0807 19:41:20.115157     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:20.115157     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:20.115157     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:20.115157     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:20.115157     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:20 GMT
	I0807 19:41:20.115576     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:20.485960     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:20.485960     956 round_trippers.go:469] Request Headers:
	I0807 19:41:20.485960     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:20.485960     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:20.489554     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:20.490363     956 round_trippers.go:577] Response Headers:
	I0807 19:41:20.490363     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:20.490363     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:20.490363     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:20.490363     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:20 GMT
	I0807 19:41:20.490363     956 round_trippers.go:580]     Audit-Id: bc21d97f-6c6e-405a-be2c-f8e1dce3bf35
	I0807 19:41:20.490363     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:20.490737     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:20.977448     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:20.977448     956 round_trippers.go:469] Request Headers:
	I0807 19:41:20.977448     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:20.977448     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:20.981311     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:20.981311     956 round_trippers.go:577] Response Headers:
	I0807 19:41:20.981311     956 round_trippers.go:580]     Audit-Id: 2f636b39-c42c-44a2-986a-8e7ae15aa1a4
	I0807 19:41:20.981311     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:20.981311     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:20.981311     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:20.981591     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:20.981591     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:21 GMT
	I0807 19:41:20.981865     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:21.485996     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:21.486263     956 round_trippers.go:469] Request Headers:
	I0807 19:41:21.486263     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:21.486263     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:21.490704     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:41:21.490704     956 round_trippers.go:577] Response Headers:
	I0807 19:41:21.490704     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:21.490704     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:21.490704     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:21 GMT
	I0807 19:41:21.490704     956 round_trippers.go:580]     Audit-Id: 4c148e0e-f9de-4f0c-8a5c-704ceae2eb72
	I0807 19:41:21.490807     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:21.490807     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:21.491025     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:21.492612     956 node_ready.go:53] node "multinode-116700-m02" has status "Ready":"False"
	I0807 19:41:21.979883     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:21.979883     956 round_trippers.go:469] Request Headers:
	I0807 19:41:21.979883     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:21.979883     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:21.984611     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:41:21.984673     956 round_trippers.go:577] Response Headers:
	I0807 19:41:21.984673     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:22 GMT
	I0807 19:41:21.984673     956 round_trippers.go:580]     Audit-Id: dbc1be48-cc0c-45d1-8222-7f4e9f70acd0
	I0807 19:41:21.984673     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:21.984673     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:21.984673     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:21.984673     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:21.985472     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:22.484496     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:22.484496     956 round_trippers.go:469] Request Headers:
	I0807 19:41:22.484496     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:22.484496     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:22.489503     956 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 19:41:22.489503     956 round_trippers.go:577] Response Headers:
	I0807 19:41:22.489503     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:22.489503     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:22.489503     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:22 GMT
	I0807 19:41:22.489503     956 round_trippers.go:580]     Audit-Id: 7fa2a697-9c05-4347-abf9-08e2d5037804
	I0807 19:41:22.489503     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:22.489503     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:22.489503     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:22.977607     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:22.977607     956 round_trippers.go:469] Request Headers:
	I0807 19:41:22.977607     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:22.977607     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:22.980226     956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 19:41:22.980226     956 round_trippers.go:577] Response Headers:
	I0807 19:41:22.980226     956 round_trippers.go:580]     Audit-Id: 6bbcfe6d-6d0b-4975-9fce-4942145e1c7d
	I0807 19:41:22.980226     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:22.980226     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:22.980226     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:22.980624     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:22.980624     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:23 GMT
	I0807 19:41:22.980842     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:23.484549     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:23.484549     956 round_trippers.go:469] Request Headers:
	I0807 19:41:23.484549     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:23.484549     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:23.488161     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:23.488161     956 round_trippers.go:577] Response Headers:
	I0807 19:41:23.488161     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:23 GMT
	I0807 19:41:23.488161     956 round_trippers.go:580]     Audit-Id: 274f295c-9db1-4619-9e1d-cc16bacdd343
	I0807 19:41:23.488790     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:23.488790     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:23.488790     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:23.488790     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:23.489108     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:23.984827     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:23.984827     956 round_trippers.go:469] Request Headers:
	I0807 19:41:23.984827     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:23.984919     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:23.988886     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:23.988886     956 round_trippers.go:577] Response Headers:
	I0807 19:41:23.988961     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:24 GMT
	I0807 19:41:23.988961     956 round_trippers.go:580]     Audit-Id: c616915b-c730-4fc1-ab4b-d12efcd8f8d3
	I0807 19:41:23.988961     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:23.988961     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:23.988961     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:23.988961     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:23.989227     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:23.989350     956 node_ready.go:53] node "multinode-116700-m02" has status "Ready":"False"
	I0807 19:41:24.485496     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:24.485589     956 round_trippers.go:469] Request Headers:
	I0807 19:41:24.485589     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:24.485589     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:24.488987     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:24.488987     956 round_trippers.go:577] Response Headers:
	I0807 19:41:24.488987     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:24 GMT
	I0807 19:41:24.488987     956 round_trippers.go:580]     Audit-Id: 299d3365-33b4-483a-bc93-652f7cb1742a
	I0807 19:41:24.488987     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:24.488987     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:24.489422     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:24.489422     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:24.490008     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:24.986616     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:24.986616     956 round_trippers.go:469] Request Headers:
	I0807 19:41:24.986616     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:24.986616     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:24.991251     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:24.991251     956 round_trippers.go:577] Response Headers:
	I0807 19:41:24.991344     956 round_trippers.go:580]     Audit-Id: 00aa6e15-a871-4688-88ea-2025bb66c881
	I0807 19:41:24.991344     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:24.991344     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:24.991344     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:24.991344     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:24.991344     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:25 GMT
	I0807 19:41:24.991583     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:25.474114     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:25.474114     956 round_trippers.go:469] Request Headers:
	I0807 19:41:25.474114     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:25.474192     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:25.477473     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:25.477473     956 round_trippers.go:577] Response Headers:
	I0807 19:41:25.477473     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:25.477473     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:25.477473     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:25.477473     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:25 GMT
	I0807 19:41:25.477473     956 round_trippers.go:580]     Audit-Id: 516c7c9a-9bac-4635-8767-08653a653ba4
	I0807 19:41:25.478269     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:25.478669     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:25.987475     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:25.987554     956 round_trippers.go:469] Request Headers:
	I0807 19:41:25.987554     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:25.987554     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:25.990512     956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 19:41:25.991181     956 round_trippers.go:577] Response Headers:
	I0807 19:41:25.991181     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:25.991181     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:26 GMT
	I0807 19:41:25.991274     956 round_trippers.go:580]     Audit-Id: abda8925-d664-4256-a523-b262443a1ad7
	I0807 19:41:25.991274     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:25.991274     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:25.991274     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:25.991510     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:25.991671     956 node_ready.go:53] node "multinode-116700-m02" has status "Ready":"False"
	I0807 19:41:26.484207     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:26.484207     956 round_trippers.go:469] Request Headers:
	I0807 19:41:26.484207     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:26.484207     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:26.488689     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:41:26.488947     956 round_trippers.go:577] Response Headers:
	I0807 19:41:26.488947     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:26 GMT
	I0807 19:41:26.488947     956 round_trippers.go:580]     Audit-Id: c2fc341e-69e5-4024-8a9c-fb2a96a49c2e
	I0807 19:41:26.488947     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:26.488947     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:26.488947     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:26.488947     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:26.489510     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:26.983478     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:26.983546     956 round_trippers.go:469] Request Headers:
	I0807 19:41:26.983546     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:26.983546     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:26.989452     956 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 19:41:26.989452     956 round_trippers.go:577] Response Headers:
	I0807 19:41:26.989452     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:26.989452     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:26.989452     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:27 GMT
	I0807 19:41:26.989452     956 round_trippers.go:580]     Audit-Id: dcef0e64-0840-4f55-9be4-1fddbe977afc
	I0807 19:41:26.989452     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:26.989452     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:26.990677     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:27.484777     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:27.484994     956 round_trippers.go:469] Request Headers:
	I0807 19:41:27.484994     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:27.484994     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:27.487742     956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 19:41:27.487742     956 round_trippers.go:577] Response Headers:
	I0807 19:41:27.488595     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:27.488595     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:27 GMT
	I0807 19:41:27.488595     956 round_trippers.go:580]     Audit-Id: ff72e420-080d-4586-a9fc-b4fda73eeebc
	I0807 19:41:27.488595     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:27.488595     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:27.488595     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:27.488858     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:27.987021     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:27.987082     956 round_trippers.go:469] Request Headers:
	I0807 19:41:27.987082     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:27.987082     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:27.993727     956 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 19:41:27.993727     956 round_trippers.go:577] Response Headers:
	I0807 19:41:27.993727     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:27.993727     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:27.993727     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:28 GMT
	I0807 19:41:27.993814     956 round_trippers.go:580]     Audit-Id: 97bbe366-dae3-4b05-8a59-b38e943b8d09
	I0807 19:41:27.993814     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:27.993814     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:27.994069     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:27.994603     956 node_ready.go:53] node "multinode-116700-m02" has status "Ready":"False"
	I0807 19:41:28.486513     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:28.486513     956 round_trippers.go:469] Request Headers:
	I0807 19:41:28.486513     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:28.486513     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:28.492591     956 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 19:41:28.492591     956 round_trippers.go:577] Response Headers:
	I0807 19:41:28.492591     956 round_trippers.go:580]     Audit-Id: fe8aae12-eec6-421c-8beb-7f7bfbe64ecd
	I0807 19:41:28.492591     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:28.492591     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:28.492591     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:28.492591     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:28.492591     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:28 GMT
	I0807 19:41:28.493155     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:28.984154     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:28.984154     956 round_trippers.go:469] Request Headers:
	I0807 19:41:28.984154     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:28.984154     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:28.986799     956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 19:41:28.986799     956 round_trippers.go:577] Response Headers:
	I0807 19:41:28.987650     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:28.987650     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:28.987650     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:28.987650     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:29 GMT
	I0807 19:41:28.987650     956 round_trippers.go:580]     Audit-Id: 73d6a12b-b1b5-49ff-a3bd-a6345f16a7cc
	I0807 19:41:28.987650     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:28.988158     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:29.483842     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:29.483842     956 round_trippers.go:469] Request Headers:
	I0807 19:41:29.483842     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:29.483842     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:29.487293     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:29.487293     956 round_trippers.go:577] Response Headers:
	I0807 19:41:29.487293     956 round_trippers.go:580]     Audit-Id: cb44abe8-645c-4973-a585-5d3535bf397e
	I0807 19:41:29.487293     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:29.487293     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:29.487293     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:29.487293     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:29.487293     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:29 GMT
	I0807 19:41:29.488280     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:29.984177     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:29.984177     956 round_trippers.go:469] Request Headers:
	I0807 19:41:29.984177     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:29.984177     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:29.987705     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:29.987969     956 round_trippers.go:577] Response Headers:
	I0807 19:41:29.987969     956 round_trippers.go:580]     Audit-Id: 1c5b1545-9c65-4b53-80cb-a02926542efc
	I0807 19:41:29.987969     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:29.987969     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:29.987969     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:29.987969     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:29.988071     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:30 GMT
	I0807 19:41:29.988259     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:30.483526     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:30.483526     956 round_trippers.go:469] Request Headers:
	I0807 19:41:30.483526     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:30.483526     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:30.487147     956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 19:41:30.487197     956 round_trippers.go:577] Response Headers:
	I0807 19:41:30.487197     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:30.487197     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:30.487197     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:30.487197     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:30 GMT
	I0807 19:41:30.487197     956 round_trippers.go:580]     Audit-Id: febd792e-60f1-4aac-ba27-ba73caa7e389
	I0807 19:41:30.487197     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:30.487197     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:30.488042     956 node_ready.go:53] node "multinode-116700-m02" has status "Ready":"False"
	I0807 19:41:30.982435     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:30.982657     956 round_trippers.go:469] Request Headers:
	I0807 19:41:30.982657     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:30.982657     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:30.987373     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:41:30.987373     956 round_trippers.go:577] Response Headers:
	I0807 19:41:30.987373     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:30.987553     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:31 GMT
	I0807 19:41:30.987553     956 round_trippers.go:580]     Audit-Id: 5021223f-e848-456b-b324-b9ee843913e4
	I0807 19:41:30.987553     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:30.987553     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:30.987553     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:30.988794     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:31.484366     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:31.484366     956 round_trippers.go:469] Request Headers:
	I0807 19:41:31.484443     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:31.484443     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:31.489009     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:41:31.489009     956 round_trippers.go:577] Response Headers:
	I0807 19:41:31.489009     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:31.489009     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:31.489009     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:31 GMT
	I0807 19:41:31.489009     956 round_trippers.go:580]     Audit-Id: 30cd729e-55a6-4de7-ae75-a8089a0de0ae
	I0807 19:41:31.489009     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:31.489009     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:31.489889     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:31.982084     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:31.982221     956 round_trippers.go:469] Request Headers:
	I0807 19:41:31.982221     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:31.982221     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:31.985491     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:31.985907     956 round_trippers.go:577] Response Headers:
	I0807 19:41:31.985907     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:31.985907     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:31.985907     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:31.985907     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:31.985907     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:32 GMT
	I0807 19:41:31.985907     956 round_trippers.go:580]     Audit-Id: dfd8c283-8cfa-4ea2-86a3-75b2e4bc985e
	I0807 19:41:31.986326     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:32.480935     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:32.480935     956 round_trippers.go:469] Request Headers:
	I0807 19:41:32.480935     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:32.480935     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:32.484541     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:32.485149     956 round_trippers.go:577] Response Headers:
	I0807 19:41:32.485149     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:32.485149     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:32.485149     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:32.485149     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:32 GMT
	I0807 19:41:32.485149     956 round_trippers.go:580]     Audit-Id: 89ce750c-b263-4980-a935-fed2002812d3
	I0807 19:41:32.485149     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:32.485434     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:32.976606     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:32.976606     956 round_trippers.go:469] Request Headers:
	I0807 19:41:32.976606     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:32.976606     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:32.979725     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:32.980636     956 round_trippers.go:577] Response Headers:
	I0807 19:41:32.980636     956 round_trippers.go:580]     Audit-Id: c8d4bca7-ecae-4fb0-898d-9f7034b366b5
	I0807 19:41:32.980636     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:32.980636     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:32.980636     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:32.980636     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:32.980636     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:33 GMT
	I0807 19:41:32.980636     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:32.981511     956 node_ready.go:53] node "multinode-116700-m02" has status "Ready":"False"
	I0807 19:41:33.475217     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:33.475355     956 round_trippers.go:469] Request Headers:
	I0807 19:41:33.475355     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:33.475355     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:33.482403     956 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 19:41:33.482403     956 round_trippers.go:577] Response Headers:
	I0807 19:41:33.482403     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:33.482403     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:33.482403     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:33.482403     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:33 GMT
	I0807 19:41:33.482403     956 round_trippers.go:580]     Audit-Id: b7f5712c-a67f-4c6a-897c-684f297c2101
	I0807 19:41:33.482403     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:33.482403     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:33.975770     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:33.975770     956 round_trippers.go:469] Request Headers:
	I0807 19:41:33.975770     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:33.975872     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:33.979281     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:33.979281     956 round_trippers.go:577] Response Headers:
	I0807 19:41:33.979281     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:33.979838     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:33.979838     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:34 GMT
	I0807 19:41:33.979838     956 round_trippers.go:580]     Audit-Id: 8b8dc0f0-4929-439d-981a-3a06205d4c9d
	I0807 19:41:33.979838     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:33.979838     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:33.980063     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:34.488597     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:34.488693     956 round_trippers.go:469] Request Headers:
	I0807 19:41:34.488693     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:34.488693     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:34.495291     956 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 19:41:34.495291     956 round_trippers.go:577] Response Headers:
	I0807 19:41:34.495291     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:34.495291     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:34 GMT
	I0807 19:41:34.495291     956 round_trippers.go:580]     Audit-Id: ae561b35-fe0a-4adf-889c-e5f1238a9d9b
	I0807 19:41:34.495291     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:34.495291     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:34.495291     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:34.496088     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:34.983478     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:34.983831     956 round_trippers.go:469] Request Headers:
	I0807 19:41:34.983831     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:34.983831     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:34.987949     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:41:34.987949     956 round_trippers.go:577] Response Headers:
	I0807 19:41:34.987949     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:34.987949     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:35 GMT
	I0807 19:41:34.987949     956 round_trippers.go:580]     Audit-Id: 65d9e691-62f1-43bb-a1ec-9083941e0915
	I0807 19:41:34.987949     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:34.987949     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:34.987949     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:34.988967     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:34.989185     956 node_ready.go:53] node "multinode-116700-m02" has status "Ready":"False"
	I0807 19:41:35.483411     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:35.483515     956 round_trippers.go:469] Request Headers:
	I0807 19:41:35.483515     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:35.483515     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:35.487490     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:35.487490     956 round_trippers.go:577] Response Headers:
	I0807 19:41:35.487490     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:35.487490     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:35.487490     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:35.487490     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:35 GMT
	I0807 19:41:35.487490     956 round_trippers.go:580]     Audit-Id: 15988c99-5f95-4054-935d-4d9bbfcfeb67
	I0807 19:41:35.487490     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:35.487765     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:35.983453     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:35.983738     956 round_trippers.go:469] Request Headers:
	I0807 19:41:35.983738     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:35.983738     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:35.987210     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:35.987210     956 round_trippers.go:577] Response Headers:
	I0807 19:41:35.987210     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:35.987210     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:35.988209     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:35.988209     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:35.988209     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:36 GMT
	I0807 19:41:35.988209     956 round_trippers.go:580]     Audit-Id: e3a0e53c-0c81-4db4-a225-a153edd41755
	I0807 19:41:35.988448     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:36.483403     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:36.483403     956 round_trippers.go:469] Request Headers:
	I0807 19:41:36.483403     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:36.483403     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:36.487154     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:36.487511     956 round_trippers.go:577] Response Headers:
	I0807 19:41:36.487511     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:36.487511     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:36.487511     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:36.487511     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:36.487511     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:36 GMT
	I0807 19:41:36.487511     956 round_trippers.go:580]     Audit-Id: 8d9d17a5-cfff-49af-a2f1-5b217b0ecc9b
	I0807 19:41:36.487780     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:36.982665     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:36.982665     956 round_trippers.go:469] Request Headers:
	I0807 19:41:36.982665     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:36.982665     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:36.986309     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:36.986309     956 round_trippers.go:577] Response Headers:
	I0807 19:41:36.986309     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:36.986309     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:36.986309     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:36.986309     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:36.986309     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:37 GMT
	I0807 19:41:36.986309     956 round_trippers.go:580]     Audit-Id: 910c8ebc-3bb3-47af-918a-1a9cff30e74e
	I0807 19:41:36.987214     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:37.480295     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:37.480538     956 round_trippers.go:469] Request Headers:
	I0807 19:41:37.480538     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:37.480538     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:37.483958     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:37.483958     956 round_trippers.go:577] Response Headers:
	I0807 19:41:37.483958     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:37.483958     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:37.483958     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:37.484162     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:37.484162     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:37 GMT
	I0807 19:41:37.484162     956 round_trippers.go:580]     Audit-Id: e1a69a9b-7d60-4c00-8eef-5a2774642a14
	I0807 19:41:37.484453     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"653","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0807 19:41:37.484617     956 node_ready.go:53] node "multinode-116700-m02" has status "Ready":"False"
	I0807 19:41:37.981366     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:37.981447     956 round_trippers.go:469] Request Headers:
	I0807 19:41:37.981447     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:37.981447     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:37.984865     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:37.985625     956 round_trippers.go:577] Response Headers:
	I0807 19:41:37.985625     956 round_trippers.go:580]     Audit-Id: 380a6639-77be-4d66-ad03-020fafdd62f2
	I0807 19:41:37.985625     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:37.985625     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:37.985625     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:37.985728     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:37.985728     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:38 GMT
	I0807 19:41:37.985890     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"683","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3910 chars]
	I0807 19:41:38.481617     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:38.481617     956 round_trippers.go:469] Request Headers:
	I0807 19:41:38.481617     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:38.481617     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:38.485184     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:38.486214     956 round_trippers.go:577] Response Headers:
	I0807 19:41:38.486214     956 round_trippers.go:580]     Audit-Id: a7310ccf-fea7-4f4a-953a-28696123a151
	I0807 19:41:38.486269     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:38.486269     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:38.486269     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:38.486269     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:38.486269     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:38 GMT
	I0807 19:41:38.486507     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"683","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3910 chars]
	I0807 19:41:38.982300     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:38.982505     956 round_trippers.go:469] Request Headers:
	I0807 19:41:38.982505     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:38.982505     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:38.988634     956 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 19:41:38.988696     956 round_trippers.go:577] Response Headers:
	I0807 19:41:38.988745     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:38.988745     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:39 GMT
	I0807 19:41:38.988745     956 round_trippers.go:580]     Audit-Id: 8f49e51e-1bf6-438c-88e1-c6dee5a69da0
	I0807 19:41:38.988745     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:38.988745     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:38.988745     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:38.988850     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"683","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3910 chars]
	I0807 19:41:39.480688     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:39.480688     956 round_trippers.go:469] Request Headers:
	I0807 19:41:39.480688     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:39.480688     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:39.484334     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:39.484334     956 round_trippers.go:577] Response Headers:
	I0807 19:41:39.484334     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:39.484334     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:39 GMT
	I0807 19:41:39.484334     956 round_trippers.go:580]     Audit-Id: ff454926-e4b3-45e8-bc1b-d4be48744bad
	I0807 19:41:39.484334     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:39.484334     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:39.484334     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:39.485442     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"683","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3910 chars]
	I0807 19:41:39.485958     956 node_ready.go:53] node "multinode-116700-m02" has status "Ready":"False"
	I0807 19:41:39.980152     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:39.980397     956 round_trippers.go:469] Request Headers:
	I0807 19:41:39.980397     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:39.980397     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:39.984464     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:41:39.984464     956 round_trippers.go:577] Response Headers:
	I0807 19:41:39.984464     956 round_trippers.go:580]     Audit-Id: 79f00d01-4447-4dd2-9bb8-b0fbd4960c33
	I0807 19:41:39.984464     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:39.984607     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:39.984607     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:39.984607     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:39.984607     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:40 GMT
	I0807 19:41:39.985056     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"683","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3910 chars]
	I0807 19:41:40.478986     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:40.478986     956 round_trippers.go:469] Request Headers:
	I0807 19:41:40.478986     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:40.478986     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:40.482591     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:40.482591     956 round_trippers.go:577] Response Headers:
	I0807 19:41:40.483326     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:40.483326     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:40.483326     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:40.483326     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:40.483326     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:40 GMT
	I0807 19:41:40.483326     956 round_trippers.go:580]     Audit-Id: a79f4c85-f6b5-4b5d-9bd7-199f6e42f261
	I0807 19:41:40.483779     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"683","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3910 chars]
	I0807 19:41:40.982230     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:40.982319     956 round_trippers.go:469] Request Headers:
	I0807 19:41:40.982319     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:40.982319     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:40.986055     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:40.986055     956 round_trippers.go:577] Response Headers:
	I0807 19:41:40.986055     956 round_trippers.go:580]     Audit-Id: 378602a9-5034-49b4-8c64-33816b0366c1
	I0807 19:41:40.986786     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:40.986786     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:40.986786     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:40.986786     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:40.986786     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:41 GMT
	I0807 19:41:40.987416     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"689","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3776 chars]
	I0807 19:41:40.987667     956 node_ready.go:49] node "multinode-116700-m02" has status "Ready":"True"
	I0807 19:41:40.987667     956 node_ready.go:38] duration metric: took 32.5144095s for node "multinode-116700-m02" to be "Ready" ...
	I0807 19:41:40.987667     956 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 19:41:40.987667     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods
	I0807 19:41:40.987667     956 round_trippers.go:469] Request Headers:
	I0807 19:41:40.987667     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:40.987667     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:40.992611     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:41:40.993322     956 round_trippers.go:577] Response Headers:
	I0807 19:41:40.993322     956 round_trippers.go:580]     Audit-Id: 25ffb950-1762-4f4c-8051-285131b3c0c0
	I0807 19:41:40.993322     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:40.993322     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:40.993322     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:40.993322     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:40.993322     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:41 GMT
	I0807 19:41:40.994913     956 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"689"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"467","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70428 chars]
	I0807 19:41:40.998361     956 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace to be "Ready" ...
	I0807 19:41:40.998488     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 19:41:40.998609     956 round_trippers.go:469] Request Headers:
	I0807 19:41:40.998609     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:40.998609     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:41.003234     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:41:41.003234     956 round_trippers.go:577] Response Headers:
	I0807 19:41:41.003234     956 round_trippers.go:580]     Audit-Id: ddc7b6da-770c-4074-b5f5-976ed2b67963
	I0807 19:41:41.003234     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:41.003234     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:41.003234     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:41.003234     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:41.003234     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:41 GMT
	I0807 19:41:41.003911     956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"467","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0807 19:41:41.003911     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:41:41.003911     956 round_trippers.go:469] Request Headers:
	I0807 19:41:41.003911     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:41.003911     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:41.006953     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:41.006953     956 round_trippers.go:577] Response Headers:
	I0807 19:41:41.006953     956 round_trippers.go:580]     Audit-Id: b68f1fb8-8823-42fb-b403-037032a71294
	I0807 19:41:41.006953     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:41.006953     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:41.006953     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:41.006953     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:41.006953     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:41 GMT
	I0807 19:41:41.008183     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"448","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0807 19:41:41.008183     956 pod_ready.go:92] pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace has status "Ready":"True"
	I0807 19:41:41.008183     956 pod_ready.go:81] duration metric: took 9.8219ms for pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace to be "Ready" ...
	I0807 19:41:41.008183     956 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 19:41:41.008183     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-116700
	I0807 19:41:41.008183     956 round_trippers.go:469] Request Headers:
	I0807 19:41:41.008183     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:41.008183     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:41.010796     956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 19:41:41.010796     956 round_trippers.go:577] Response Headers:
	I0807 19:41:41.011526     956 round_trippers.go:580]     Audit-Id: 2978e84a-238a-465d-bb17-0914d259d1da
	I0807 19:41:41.011526     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:41.011526     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:41.011526     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:41.011526     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:41.011526     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:41 GMT
	I0807 19:41:41.011526     956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-116700","namespace":"kube-system","uid":"fbae8778-c573-4d9b-a21e-e5fcb236586e","resourceVersion":"425","creationTimestamp":"2024-08-07T19:37:39Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.224.86:2379","kubernetes.io/config.hash":"7ac46b48ad876a3a598d6eacbc5ad1fe","kubernetes.io/config.mirror":"7ac46b48ad876a3a598d6eacbc5ad1fe","kubernetes.io/config.seen":"2024-08-07T19:37:39.552052160Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0807 19:41:41.012107     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:41:41.012107     956 round_trippers.go:469] Request Headers:
	I0807 19:41:41.012107     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:41.012107     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:41.014545     956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 19:41:41.014922     956 round_trippers.go:577] Response Headers:
	I0807 19:41:41.014922     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:41.014922     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:41 GMT
	I0807 19:41:41.014991     956 round_trippers.go:580]     Audit-Id: 529ff1e3-c713-4419-9da0-ce126e2ec4bc
	I0807 19:41:41.014991     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:41.014991     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:41.014991     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:41.014991     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"448","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0807 19:41:41.015956     956 pod_ready.go:92] pod "etcd-multinode-116700" in "kube-system" namespace has status "Ready":"True"
	I0807 19:41:41.015956     956 pod_ready.go:81] duration metric: took 7.7734ms for pod "etcd-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 19:41:41.015956     956 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 19:41:41.015956     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-116700
	I0807 19:41:41.015956     956 round_trippers.go:469] Request Headers:
	I0807 19:41:41.015956     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:41.015956     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:41.018746     956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 19:41:41.018746     956 round_trippers.go:577] Response Headers:
	I0807 19:41:41.018746     956 round_trippers.go:580]     Audit-Id: d4740705-7d9a-4371-9186-506ec821fabc
	I0807 19:41:41.018746     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:41.018746     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:41.018746     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:41.018746     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:41.018746     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:41 GMT
	I0807 19:41:41.019744     956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-116700","namespace":"kube-system","uid":"6a7e36c1-9e53-4565-9998-c5bbbb1ea060","resourceVersion":"426","creationTimestamp":"2024-08-07T19:37:38Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.224.86:8443","kubernetes.io/config.hash":"f7d89d0655264a3dfa6358b49d3d5f42","kubernetes.io/config.mirror":"f7d89d0655264a3dfa6358b49d3d5f42","kubernetes.io/config.seen":"2024-08-07T19:37:31.050588290Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0807 19:41:41.019890     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:41:41.019890     956 round_trippers.go:469] Request Headers:
	I0807 19:41:41.019890     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:41.019890     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:41.027183     956 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 19:41:41.027183     956 round_trippers.go:577] Response Headers:
	I0807 19:41:41.027183     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:41 GMT
	I0807 19:41:41.027183     956 round_trippers.go:580]     Audit-Id: 536d4ad9-3366-4dd7-9c97-a54058c3ebac
	I0807 19:41:41.027183     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:41.027183     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:41.027183     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:41.027183     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:41.027183     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"448","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0807 19:41:41.027920     956 pod_ready.go:92] pod "kube-apiserver-multinode-116700" in "kube-system" namespace has status "Ready":"True"
	I0807 19:41:41.027920     956 pod_ready.go:81] duration metric: took 11.9636ms for pod "kube-apiserver-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 19:41:41.027920     956 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 19:41:41.027920     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-116700
	I0807 19:41:41.027920     956 round_trippers.go:469] Request Headers:
	I0807 19:41:41.027920     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:41.027920     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:41.030241     956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 19:41:41.030241     956 round_trippers.go:577] Response Headers:
	I0807 19:41:41.030241     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:41 GMT
	I0807 19:41:41.031205     956 round_trippers.go:580]     Audit-Id: a0f47418-4e55-4182-8483-4155144b6e8e
	I0807 19:41:41.031205     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:41.031205     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:41.031242     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:41.031242     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:41.031242     956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-116700","namespace":"kube-system","uid":"4d2e8250-9b12-4277-8834-515c1621fc78","resourceVersion":"423","creationTimestamp":"2024-08-07T19:37:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ef62d358a9b469de2443e4a4f620921d","kubernetes.io/config.mirror":"ef62d358a9b469de2443e4a4f620921d","kubernetes.io/config.seen":"2024-08-07T19:37:39.552053960Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0807 19:41:41.032084     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:41:41.032113     956 round_trippers.go:469] Request Headers:
	I0807 19:41:41.032160     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:41.032160     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:41.034407     956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 19:41:41.034407     956 round_trippers.go:577] Response Headers:
	I0807 19:41:41.034407     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:41 GMT
	I0807 19:41:41.034407     956 round_trippers.go:580]     Audit-Id: b6cac337-fa16-4046-84b7-45e92c56280d
	I0807 19:41:41.034407     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:41.034407     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:41.034407     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:41.034407     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:41.035347     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"448","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0807 19:41:41.035771     956 pod_ready.go:92] pod "kube-controller-manager-multinode-116700" in "kube-system" namespace has status "Ready":"True"
	I0807 19:41:41.035771     956 pod_ready.go:81] duration metric: took 7.8515ms for pod "kube-controller-manager-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 19:41:41.035771     956 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fmjt9" in "kube-system" namespace to be "Ready" ...
	I0807 19:41:41.188026     956 request.go:629] Waited for 151.7455ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fmjt9
	I0807 19:41:41.188060     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fmjt9
	I0807 19:41:41.188060     956 round_trippers.go:469] Request Headers:
	I0807 19:41:41.188060     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:41.188060     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:41.191310     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:41.191310     956 round_trippers.go:577] Response Headers:
	I0807 19:41:41.191310     956 round_trippers.go:580]     Audit-Id: 2183434b-b2ff-443a-85c5-097e9bdcf777
	I0807 19:41:41.191310     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:41.191310     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:41.191310     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:41.191310     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:41.191514     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:41 GMT
	I0807 19:41:41.191921     956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fmjt9","generateName":"kube-proxy-","namespace":"kube-system","uid":"766df91e-8fd0-457b-8c11-8810059ca4d9","resourceVersion":"419","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d82856c0-3330-4ab9-b7bf-54ed48646bce","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d82856c0-3330-4ab9-b7bf-54ed48646bce\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0807 19:41:41.389903     956 request.go:629] Waited for 196.7293ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:41:41.390098     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:41:41.390098     956 round_trippers.go:469] Request Headers:
	I0807 19:41:41.390157     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:41.390157     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:41.393814     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:41.393814     956 round_trippers.go:577] Response Headers:
	I0807 19:41:41.393814     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:41.394012     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:41.394012     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:41 GMT
	I0807 19:41:41.394012     956 round_trippers.go:580]     Audit-Id: a6c4b777-35ad-4a55-9b43-b3aa9073444b
	I0807 19:41:41.394012     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:41.394012     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:41.394339     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"448","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0807 19:41:41.394943     956 pod_ready.go:92] pod "kube-proxy-fmjt9" in "kube-system" namespace has status "Ready":"True"
	I0807 19:41:41.395010     956 pod_ready.go:81] duration metric: took 359.2338ms for pod "kube-proxy-fmjt9" in "kube-system" namespace to be "Ready" ...
	I0807 19:41:41.395010     956 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vcb7n" in "kube-system" namespace to be "Ready" ...
	I0807 19:41:41.590897     956 request.go:629] Waited for 195.6104ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vcb7n
	I0807 19:41:41.590897     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vcb7n
	I0807 19:41:41.591007     956 round_trippers.go:469] Request Headers:
	I0807 19:41:41.591007     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:41.591007     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:41.597783     956 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 19:41:41.597783     956 round_trippers.go:577] Response Headers:
	I0807 19:41:41.597783     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:41.597783     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:41 GMT
	I0807 19:41:41.597783     956 round_trippers.go:580]     Audit-Id: 3d18b2a8-9f20-4348-bfcd-454726f03265
	I0807 19:41:41.597783     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:41.597783     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:41.597783     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:41.597783     956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vcb7n","generateName":"kube-proxy-","namespace":"kube-system","uid":"d8d87ad6-19cc-45fa-8c9f-1a862fec4e59","resourceVersion":"661","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d82856c0-3330-4ab9-b7bf-54ed48646bce","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d82856c0-3330-4ab9-b7bf-54ed48646bce\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5836 chars]
	I0807 19:41:41.795951     956 request.go:629] Waited for 197.1432ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:41.796034     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700-m02
	I0807 19:41:41.796220     956 round_trippers.go:469] Request Headers:
	I0807 19:41:41.796220     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:41.796220     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:41.799040     956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 19:41:41.800051     956 round_trippers.go:577] Response Headers:
	I0807 19:41:41.800120     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:41.800120     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:41 GMT
	I0807 19:41:41.800120     956 round_trippers.go:580]     Audit-Id: 19c46da3-93e5-4cbd-81fa-db2d6e9cc359
	I0807 19:41:41.800120     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:41.800120     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:41.800120     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:41.800422     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"689","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3776 chars]
	I0807 19:41:41.800849     956 pod_ready.go:92] pod "kube-proxy-vcb7n" in "kube-system" namespace has status "Ready":"True"
	I0807 19:41:41.800902     956 pod_ready.go:81] duration metric: took 405.8866ms for pod "kube-proxy-vcb7n" in "kube-system" namespace to be "Ready" ...
	I0807 19:41:41.800902     956 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 19:41:41.997442     956 request.go:629] Waited for 196.2007ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-116700
	I0807 19:41:41.997442     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-116700
	I0807 19:41:41.997698     956 round_trippers.go:469] Request Headers:
	I0807 19:41:41.997698     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:41.997698     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:42.001305     956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 19:41:42.001803     956 round_trippers.go:577] Response Headers:
	I0807 19:41:42.001803     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:42.001803     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:42.001803     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:42.001803     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:42 GMT
	I0807 19:41:42.001803     956 round_trippers.go:580]     Audit-Id: be7bd1fa-2076-4d29-98c9-6b48f70b9989
	I0807 19:41:42.001803     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:42.002245     956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-116700","namespace":"kube-system","uid":"7b6df7b7-8c94-498a-bc4c-74d72efd572a","resourceVersion":"424","creationTimestamp":"2024-08-07T19:37:39Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fde91c95fce6faff219ccfa4b0b2484c","kubernetes.io/config.mirror":"fde91c95fce6faff219ccfa4b0b2484c","kubernetes.io/config.seen":"2024-08-07T19:37:39.552047359Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0807 19:41:42.184396     956 request.go:629] Waited for 181.3549ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:41:42.184396     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes/multinode-116700
	I0807 19:41:42.184511     956 round_trippers.go:469] Request Headers:
	I0807 19:41:42.184511     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:42.184511     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:42.189355     956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 19:41:42.189355     956 round_trippers.go:577] Response Headers:
	I0807 19:41:42.189355     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:42.189355     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:42 GMT
	I0807 19:41:42.189355     956 round_trippers.go:580]     Audit-Id: dfd97ee9-9519-47d2-83e9-c2750a46f639
	I0807 19:41:42.189355     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:42.189355     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:42.189355     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:42.190110     956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"448","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0807 19:41:42.190110     956 pod_ready.go:92] pod "kube-scheduler-multinode-116700" in "kube-system" namespace has status "Ready":"True"
	I0807 19:41:42.190110     956 pod_ready.go:81] duration metric: took 389.1505ms for pod "kube-scheduler-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 19:41:42.190110     956 pod_ready.go:38] duration metric: took 1.2024271s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 19:41:42.190110     956 system_svc.go:44] waiting for kubelet service to be running ....
	I0807 19:41:42.203697     956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 19:41:42.228625     956 system_svc.go:56] duration metric: took 38.515ms WaitForService to wait for kubelet
	I0807 19:41:42.229098     956 kubeadm.go:582] duration metric: took 34.019724s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 19:41:42.229098     956 node_conditions.go:102] verifying NodePressure condition ...
	I0807 19:41:42.387845     956 request.go:629] Waited for 158.376ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.224.86:8443/api/v1/nodes
	I0807 19:41:42.387845     956 round_trippers.go:463] GET https://172.28.224.86:8443/api/v1/nodes
	I0807 19:41:42.387845     956 round_trippers.go:469] Request Headers:
	I0807 19:41:42.387961     956 round_trippers.go:473]     Accept: application/json, */*
	I0807 19:41:42.387961     956 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 19:41:42.394949     956 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 19:41:42.394949     956 round_trippers.go:577] Response Headers:
	I0807 19:41:42.394949     956 round_trippers.go:580]     Audit-Id: 92080a95-3b8d-4d8d-b4a0-2611836056e9
	I0807 19:41:42.394949     956 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 19:41:42.394949     956 round_trippers.go:580]     Content-Type: application/json
	I0807 19:41:42.394949     956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 19:41:42.394949     956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 19:41:42.394949     956 round_trippers.go:580]     Date: Wed, 07 Aug 2024 19:41:42 GMT
	I0807 19:41:42.394949     956 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"691"},"items":[{"metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"448","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9780 chars]
	I0807 19:41:42.396173     956 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 19:41:42.396265     956 node_conditions.go:123] node cpu capacity is 2
	I0807 19:41:42.396265     956 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 19:41:42.396265     956 node_conditions.go:123] node cpu capacity is 2
	I0807 19:41:42.396347     956 node_conditions.go:105] duration metric: took 167.2462ms to run NodePressure ...
	I0807 19:41:42.396379     956 start.go:241] waiting for startup goroutines ...
	I0807 19:41:42.396431     956 start.go:255] writing updated cluster config ...
	I0807 19:41:42.410541     956 ssh_runner.go:195] Run: rm -f paused
	I0807 19:41:42.561793     956 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0807 19:41:42.566793     956 out.go:177] * Done! kubectl is now configured to use "multinode-116700" cluster and "default" namespace by default
	
	
	==> Docker <==
	Aug 07 19:38:14 multinode-116700 dockerd[1431]: time="2024-08-07T19:38:14.143319933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 19:38:14 multinode-116700 dockerd[1431]: time="2024-08-07T19:38:14.180179219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 19:38:14 multinode-116700 dockerd[1431]: time="2024-08-07T19:38:14.180402020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 19:38:14 multinode-116700 dockerd[1431]: time="2024-08-07T19:38:14.180501621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 19:38:14 multinode-116700 dockerd[1431]: time="2024-08-07T19:38:14.180967123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 19:38:14 multinode-116700 cri-dockerd[1327]: time="2024-08-07T19:38:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d716d608049c850ae6aa31a47d17637c16e8090797103361dbbc3099b1683139/resolv.conf as [nameserver 172.28.224.1]"
	Aug 07 19:38:14 multinode-116700 cri-dockerd[1327]: time="2024-08-07T19:38:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/201691a17a928419c75c4967af452ef1b46c9a4a32b953e6c051368dbb35ae55/resolv.conf as [nameserver 172.28.224.1]"
	Aug 07 19:38:14 multinode-116700 dockerd[1431]: time="2024-08-07T19:38:14.607081035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 19:38:14 multinode-116700 dockerd[1431]: time="2024-08-07T19:38:14.607485638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 19:38:14 multinode-116700 dockerd[1431]: time="2024-08-07T19:38:14.607500738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 19:38:14 multinode-116700 dockerd[1431]: time="2024-08-07T19:38:14.607637639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 19:38:14 multinode-116700 dockerd[1431]: time="2024-08-07T19:38:14.703447353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 19:38:14 multinode-116700 dockerd[1431]: time="2024-08-07T19:38:14.703668855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 19:38:14 multinode-116700 dockerd[1431]: time="2024-08-07T19:38:14.705908371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 19:38:14 multinode-116700 dockerd[1431]: time="2024-08-07T19:38:14.706202273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 19:42:08 multinode-116700 dockerd[1431]: time="2024-08-07T19:42:08.840500977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 19:42:08 multinode-116700 dockerd[1431]: time="2024-08-07T19:42:08.840656078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 19:42:08 multinode-116700 dockerd[1431]: time="2024-08-07T19:42:08.840696479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 19:42:08 multinode-116700 dockerd[1431]: time="2024-08-07T19:42:08.840988681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 19:42:09 multinode-116700 cri-dockerd[1327]: time="2024-08-07T19:42:09Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/466d29d2ebc7460e3202c73f559ffc85039ccc3b0e7a92b36bc63da373c1e015/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 07 19:42:10 multinode-116700 cri-dockerd[1327]: time="2024-08-07T19:42:10Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 07 19:42:10 multinode-116700 dockerd[1431]: time="2024-08-07T19:42:10.531744959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 19:42:10 multinode-116700 dockerd[1431]: time="2024-08-07T19:42:10.532029763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 19:42:10 multinode-116700 dockerd[1431]: time="2024-08-07T19:42:10.532064463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 19:42:10 multinode-116700 dockerd[1431]: time="2024-08-07T19:42:10.532494969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4cb0f5f04f1c3       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   50 seconds ago      Running             busybox                   0                   466d29d2ebc74       busybox-fc5497c4f-s4njd
	32f103de03d30       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   0                   201691a17a928       coredns-7db6d8ff4d-7l6v2
	b6325ae79a145       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   d716d608049c8       storage-provisioner
	ec2579bb9d23c       kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3              4 minutes ago       Running             kindnet-cni               0                   0877557fcf515       kindnet-kltmx
	3b896a77f5466       55bb025d2cfa5                                                                                         5 minutes ago       Running             kube-proxy                0                   9fd565bc62073       kube-proxy-fmjt9
	1415d4256b4a2       3edc18e7b7672                                                                                         5 minutes ago       Running             kube-scheduler            0                   1e5d82deee2fc       kube-scheduler-multinode-116700
	c90df84145cbd       3861cfcd7c04c                                                                                         5 minutes ago       Running             etcd                      0                   92cf9118dac26       etcd-multinode-116700
	1dbaa8c7ed692       1f6d574d502f3                                                                                         5 minutes ago       Running             kube-apiserver            0                   548a9e3a6616b       kube-apiserver-multinode-116700
	c50e3a9ac99f7       76932a3b37d7e                                                                                         5 minutes ago       Running             kube-controller-manager   0                   3047b2dc6a149       kube-controller-manager-multinode-116700
	
	
	==> coredns [32f103de03d3] <==
	[INFO] 10.244.1.2:36183 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000215203s
	[INFO] 10.244.0.3:45757 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169502s
	[INFO] 10.244.0.3:50310 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000072801s
	[INFO] 10.244.0.3:40617 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000124501s
	[INFO] 10.244.0.3:49260 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123802s
	[INFO] 10.244.0.3:53569 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000158302s
	[INFO] 10.244.0.3:46373 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141702s
	[INFO] 10.244.0.3:45713 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000223603s
	[INFO] 10.244.0.3:33908 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127102s
	[INFO] 10.244.1.2:40170 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108401s
	[INFO] 10.244.1.2:52007 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000168402s
	[INFO] 10.244.1.2:41791 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184802s
	[INFO] 10.244.1.2:51153 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000444005s
	[INFO] 10.244.0.3:40520 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000232003s
	[INFO] 10.244.0.3:53668 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000213402s
	[INFO] 10.244.0.3:47531 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000282304s
	[INFO] 10.244.0.3:40942 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122801s
	[INFO] 10.244.1.2:50193 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186002s
	[INFO] 10.244.1.2:35238 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000111802s
	[INFO] 10.244.1.2:36248 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084101s
	[INFO] 10.244.1.2:44351 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000084301s
	[INFO] 10.244.0.3:34541 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090901s
	[INFO] 10.244.0.3:50610 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000096301s
	[INFO] 10.244.0.3:37269 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000299303s
	[INFO] 10.244.0.3:35820 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089001s
	
	
	==> describe nodes <==
	Name:               multinode-116700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-116700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=multinode-116700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_07T19_37_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 19:37:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-116700
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 19:42:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 19:42:46 +0000   Wed, 07 Aug 2024 19:37:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 19:42:46 +0000   Wed, 07 Aug 2024 19:37:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 19:42:46 +0000   Wed, 07 Aug 2024 19:37:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 19:42:46 +0000   Wed, 07 Aug 2024 19:38:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.224.86
	  Hostname:    multinode-116700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 990c1b8b89824244b514384d46a1db99
	  System UUID:                f157be28-68de-9a48-8750-bc5dcec03341
	  Boot ID:                    6e966027-f114-4652-b99d-d747bd061686
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-s4njd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 coredns-7db6d8ff4d-7l6v2                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m7s
	  kube-system                 etcd-multinode-116700                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m21s
	  kube-system                 kindnet-kltmx                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m7s
	  kube-system                 kube-apiserver-multinode-116700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-controller-manager-multinode-116700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-proxy-fmjt9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-scheduler-multinode-116700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m4s                   kube-proxy       
	  Normal  Starting                 5m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m29s (x8 over 5m29s)  kubelet          Node multinode-116700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m29s (x8 over 5m29s)  kubelet          Node multinode-116700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m29s (x7 over 5m29s)  kubelet          Node multinode-116700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m21s                  kubelet          Node multinode-116700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m21s                  kubelet          Node multinode-116700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m21s                  kubelet          Node multinode-116700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m8s                   node-controller  Node multinode-116700 event: Registered Node multinode-116700 in Controller
	  Normal  NodeReady                4m47s                  kubelet          Node multinode-116700 status is now: NodeReady
	
	
	Name:               multinode-116700-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-116700-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=multinode-116700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_07T19_41_08_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 19:41:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-116700-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 19:42:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 19:42:39 +0000   Wed, 07 Aug 2024 19:41:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 19:42:39 +0000   Wed, 07 Aug 2024 19:41:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 19:42:39 +0000   Wed, 07 Aug 2024 19:41:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 19:42:39 +0000   Wed, 07 Aug 2024 19:41:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.226.55
	  Hostname:    multinode-116700-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c49ef1cc90f24b7ab5f81237ccd4f927
	  System UUID:                42521705-30fc-8045-86f4-7e91b71785af
	  Boot ID:                    73fa879f-0034-4d78-82fd-ae0e4a83f35e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jpc88    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kindnet-gk542              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      113s
	  kube-system                 kube-proxy-vcb7n           0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 101s                 kube-proxy       
	  Normal  RegisteredNode           113s                 node-controller  Node multinode-116700-m02 event: Registered Node multinode-116700-m02 in Controller
	  Normal  NodeHasSufficientMemory  113s (x2 over 113s)  kubelet          Node multinode-116700-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x2 over 113s)  kubelet          Node multinode-116700-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x2 over 113s)  kubelet          Node multinode-116700-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  113s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                80s                  kubelet          Node multinode-116700-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug 7 19:36] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.169891] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[Aug 7 19:37] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[  +0.111774] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.517257] systemd-fstab-generator[1038]: Ignoring "noauto" option for root device
	[  +0.206011] systemd-fstab-generator[1051]: Ignoring "noauto" option for root device
	[  +0.243511] systemd-fstab-generator[1065]: Ignoring "noauto" option for root device
	[  +2.869269] systemd-fstab-generator[1280]: Ignoring "noauto" option for root device
	[  +0.205034] systemd-fstab-generator[1292]: Ignoring "noauto" option for root device
	[  +0.196987] systemd-fstab-generator[1304]: Ignoring "noauto" option for root device
	[  +0.269903] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[ +11.579046] systemd-fstab-generator[1417]: Ignoring "noauto" option for root device
	[  +0.105250] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.789753] systemd-fstab-generator[1665]: Ignoring "noauto" option for root device
	[  +6.453068] systemd-fstab-generator[1868]: Ignoring "noauto" option for root device
	[  +0.112899] kauditd_printk_skb: 70 callbacks suppressed
	[  +9.039790] systemd-fstab-generator[2274]: Ignoring "noauto" option for root device
	[  +0.133457] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.458572] systemd-fstab-generator[2458]: Ignoring "noauto" option for root device
	[  +0.208814] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 7 19:38] kauditd_printk_skb: 51 callbacks suppressed
	[Aug 7 19:41] hrtimer: interrupt took 3620626 ns
	[Aug 7 19:42] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [c90df84145cb] <==
	{"level":"info","ts":"2024-08-07T19:37:34.15388Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T19:37:34.15687Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-07T19:37:34.15701Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-07T19:37:34.157908Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a5391e13896074eb","local-member-id":"56b8c59874c680","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T19:37:34.161976Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T19:37:34.162121Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T19:37:34.169891Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-07T19:38:01.754703Z","caller":"traceutil/trace.go:171","msg":"trace[991290399] linearizableReadLoop","detail":"{readStateIndex:449; appliedIndex:448; }","duration":"264.868726ms","start":"2024-08-07T19:38:01.489813Z","end":"2024-08-07T19:38:01.754682Z","steps":["trace[991290399] 'read index received'  (duration: 264.727325ms)","trace[991290399] 'applied index is now lower than readState.Index'  (duration: 140.901µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-07T19:38:01.755536Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.529131ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-116700\" ","response":"range_response_count:1 size:4486"}
	{"level":"info","ts":"2024-08-07T19:38:01.756042Z","caller":"traceutil/trace.go:171","msg":"trace[1115331895] range","detail":"{range_begin:/registry/minions/multinode-116700; range_end:; response_count:1; response_revision:436; }","duration":"266.249637ms","start":"2024-08-07T19:38:01.489779Z","end":"2024-08-07T19:38:01.756029Z","steps":["trace[1115331895] 'agreement among raft nodes before linearized reading'  (duration: 265.529531ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-07T19:38:01.756518Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.615767ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-07T19:38:01.756745Z","caller":"traceutil/trace.go:171","msg":"trace[1948784290] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:436; }","duration":"165.871969ms","start":"2024-08-07T19:38:01.590864Z","end":"2024-08-07T19:38:01.756736Z","steps":["trace[1948784290] 'agreement among raft nodes before linearized reading'  (duration: 165.630567ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T19:38:01.756757Z","caller":"traceutil/trace.go:171","msg":"trace[972687959] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"502.644845ms","start":"2024-08-07T19:38:01.252285Z","end":"2024-08-07T19:38:01.754929Z","steps":["trace[972687959] 'process raft request'  (duration: 502.288343ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-07T19:38:01.757754Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-07T19:38:01.252271Z","time spent":"504.994263ms","remote":"127.0.0.1:42878","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2942,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:434 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:2888 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >"}
	{"level":"info","ts":"2024-08-07T19:41:00.683591Z","caller":"traceutil/trace.go:171","msg":"trace[394809206] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"268.257188ms","start":"2024-08-07T19:41:00.415314Z","end":"2024-08-07T19:41:00.683571Z","steps":["trace[394809206] 'process raft request'  (duration: 268.128887ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-07T19:41:00.686593Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.956993ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-07T19:41:00.68663Z","caller":"traceutil/trace.go:171","msg":"trace[475308095] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:602; }","duration":"102.211495ms","start":"2024-08-07T19:41:00.584409Z","end":"2024-08-07T19:41:00.68662Z","steps":["trace[475308095] 'agreement among raft nodes before linearized reading'  (duration: 99.701476ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-07T19:41:18.789804Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.621051ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-116700-m02\" ","response":"range_response_count:1 size:3148"}
	{"level":"info","ts":"2024-08-07T19:41:18.790091Z","caller":"traceutil/trace.go:171","msg":"trace[1205986869] range","detail":"{range_begin:/registry/minions/multinode-116700-m02; range_end:; response_count:1; response_revision:655; }","duration":"284.976255ms","start":"2024-08-07T19:41:18.505097Z","end":"2024-08-07T19:41:18.790073Z","steps":["trace[1205986869] 'range keys from in-memory index tree'  (duration: 284.496351ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-07T19:41:18.790387Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"275.086283ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.28.224.86\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-08-07T19:41:18.790531Z","caller":"traceutil/trace.go:171","msg":"trace[1747279970] range","detail":"{range_begin:/registry/masterleases/172.28.224.86; range_end:; response_count:1; response_revision:655; }","duration":"275.326185ms","start":"2024-08-07T19:41:18.515196Z","end":"2024-08-07T19:41:18.790522Z","steps":["trace[1747279970] 'range keys from in-memory index tree'  (duration: 274.999383ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-07T19:41:18.791019Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.370544ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-07T19:41:18.79105Z","caller":"traceutil/trace.go:171","msg":"trace[1957376032] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:655; }","duration":"200.421645ms","start":"2024-08-07T19:41:18.59062Z","end":"2024-08-07T19:41:18.791042Z","steps":["trace[1957376032] 'range keys from in-memory index tree'  (duration: 200.311344ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-07T19:41:20.134678Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.343654ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-116700-m02\" ","response":"range_response_count:1 size:3148"}
	{"level":"info","ts":"2024-08-07T19:41:20.134956Z","caller":"traceutil/trace.go:171","msg":"trace[213951908] range","detail":"{range_begin:/registry/minions/multinode-116700-m02; range_end:; response_count:1; response_revision:662; }","duration":"133.656156ms","start":"2024-08-07T19:41:20.001284Z","end":"2024-08-07T19:41:20.13494Z","steps":["trace[213951908] 'range keys from in-memory index tree'  (duration: 133.063451ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:43:00 up 7 min,  0 users,  load average: 0.27, 0.31, 0.17
	Linux multinode-116700 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ec2579bb9d23] <==
	I0807 19:41:53.231876       1 main.go:322] Node multinode-116700-m02 has CIDR [10.244.1.0/24] 
	I0807 19:42:03.231384       1 main.go:295] Handling node with IPs: map[172.28.226.55:{}]
	I0807 19:42:03.231610       1 main.go:322] Node multinode-116700-m02 has CIDR [10.244.1.0/24] 
	I0807 19:42:03.231798       1 main.go:295] Handling node with IPs: map[172.28.224.86:{}]
	I0807 19:42:03.231904       1 main.go:299] handling current node
	I0807 19:42:13.232871       1 main.go:295] Handling node with IPs: map[172.28.226.55:{}]
	I0807 19:42:13.232997       1 main.go:322] Node multinode-116700-m02 has CIDR [10.244.1.0/24] 
	I0807 19:42:13.233454       1 main.go:295] Handling node with IPs: map[172.28.224.86:{}]
	I0807 19:42:13.233546       1 main.go:299] handling current node
	I0807 19:42:23.231669       1 main.go:295] Handling node with IPs: map[172.28.224.86:{}]
	I0807 19:42:23.231857       1 main.go:299] handling current node
	I0807 19:42:23.231880       1 main.go:295] Handling node with IPs: map[172.28.226.55:{}]
	I0807 19:42:23.231889       1 main.go:322] Node multinode-116700-m02 has CIDR [10.244.1.0/24] 
	I0807 19:42:33.240281       1 main.go:295] Handling node with IPs: map[172.28.224.86:{}]
	I0807 19:42:33.240518       1 main.go:299] handling current node
	I0807 19:42:33.240666       1 main.go:295] Handling node with IPs: map[172.28.226.55:{}]
	I0807 19:42:33.240678       1 main.go:322] Node multinode-116700-m02 has CIDR [10.244.1.0/24] 
	I0807 19:42:43.231999       1 main.go:295] Handling node with IPs: map[172.28.224.86:{}]
	I0807 19:42:43.232127       1 main.go:299] handling current node
	I0807 19:42:43.232149       1 main.go:295] Handling node with IPs: map[172.28.226.55:{}]
	I0807 19:42:43.232157       1 main.go:322] Node multinode-116700-m02 has CIDR [10.244.1.0/24] 
	I0807 19:42:53.233868       1 main.go:295] Handling node with IPs: map[172.28.224.86:{}]
	I0807 19:42:53.234010       1 main.go:299] handling current node
	I0807 19:42:53.234031       1 main.go:295] Handling node with IPs: map[172.28.226.55:{}]
	I0807 19:42:53.234039       1 main.go:322] Node multinode-116700-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [1dbaa8c7ed69] <==
	I0807 19:37:38.504376       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0807 19:37:38.520291       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.224.86]
	I0807 19:37:38.521758       1 controller.go:615] quota admission added evaluator for: endpoints
	I0807 19:37:38.541792       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0807 19:37:38.952014       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0807 19:37:39.565456       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0807 19:37:39.601454       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0807 19:37:39.626101       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0807 19:37:53.197093       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0807 19:37:53.236509       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0807 19:38:01.758856       1 trace.go:236] Trace[139492760]: "Patch" accept:application/vnd.kubernetes.protobuf, */*,audit-id:a17239f1-1c81-41de-b301-b4457018e583,client:172.28.224.86,api-group:,api-version:v1,name:storage-provisioner,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/storage-provisioner/status,user-agent:kube-scheduler/v1.30.3 (linux/amd64) kubernetes/6fc0a69/scheduler,verb:PATCH (07-Aug-2024 19:38:01.250) (total time: 508ms):
	Trace[139492760]: ["GuaranteedUpdate etcd3" audit-id:a17239f1-1c81-41de-b301-b4457018e583,key:/pods/kube-system/storage-provisioner,type:*core.Pod,resource:pods 508ms (19:38:01.250)
	Trace[139492760]:  ---"Txn call completed" 507ms (19:38:01.758)]
	Trace[139492760]: ---"Object stored in database" 507ms (19:38:01.758)
	Trace[139492760]: [508.660291ms] [508.660291ms] END
	E0807 19:42:14.056287       1 conn.go:339] Error on socket receive: read tcp 172.28.224.86:8443->172.28.224.1:51860: use of closed network connection
	E0807 19:42:14.618480       1 conn.go:339] Error on socket receive: read tcp 172.28.224.86:8443->172.28.224.1:51862: use of closed network connection
	E0807 19:42:15.234178       1 conn.go:339] Error on socket receive: read tcp 172.28.224.86:8443->172.28.224.1:51864: use of closed network connection
	E0807 19:42:15.786219       1 conn.go:339] Error on socket receive: read tcp 172.28.224.86:8443->172.28.224.1:51866: use of closed network connection
	E0807 19:42:16.299016       1 conn.go:339] Error on socket receive: read tcp 172.28.224.86:8443->172.28.224.1:51868: use of closed network connection
	E0807 19:42:16.833583       1 conn.go:339] Error on socket receive: read tcp 172.28.224.86:8443->172.28.224.1:51870: use of closed network connection
	E0807 19:42:17.784083       1 conn.go:339] Error on socket receive: read tcp 172.28.224.86:8443->172.28.224.1:51873: use of closed network connection
	E0807 19:42:28.316062       1 conn.go:339] Error on socket receive: read tcp 172.28.224.86:8443->172.28.224.1:51876: use of closed network connection
	E0807 19:42:28.827020       1 conn.go:339] Error on socket receive: read tcp 172.28.224.86:8443->172.28.224.1:51880: use of closed network connection
	E0807 19:42:39.361555       1 conn.go:339] Error on socket receive: read tcp 172.28.224.86:8443->172.28.224.1:51882: use of closed network connection
	
	
	==> kube-controller-manager [c50e3a9ac99f] <==
	I0807 19:37:52.910632       1 shared_informer.go:320] Caches are synced for garbage collector
	I0807 19:37:53.504346       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="247.541063ms"
	I0807 19:37:53.525454       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.055626ms"
	I0807 19:37:53.526403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="87.4µs"
	I0807 19:37:53.634244       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.65272ms"
	I0807 19:37:53.669974       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="35.185579ms"
	I0807 19:37:53.670426       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="167.602µs"
	I0807 19:38:13.539079       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.4µs"
	I0807 19:38:13.575698       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.801µs"
	I0807 19:38:15.564638       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47.302µs"
	I0807 19:38:15.624519       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.434358ms"
	I0807 19:38:15.625391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="685.725µs"
	I0807 19:38:17.421298       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0807 19:41:07.170093       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-116700-m02\" does not exist"
	I0807 19:41:07.185316       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-116700-m02" podCIDRs=["10.244.1.0/24"]
	I0807 19:41:07.454154       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-116700-m02"
	I0807 19:41:40.538335       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-116700-m02"
	I0807 19:42:08.245298       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.57851ms"
	I0807 19:42:08.263355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.757411ms"
	I0807 19:42:08.263438       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.4µs"
	I0807 19:42:08.280233       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31µs"
	I0807 19:42:10.760509       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.696319ms"
	I0807 19:42:10.760870       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="280.004µs"
	I0807 19:42:11.047780       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.574392ms"
	I0807 19:42:11.048227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.101µs"
	
	
	==> kube-proxy [3b896a77f546] <==
	I0807 19:37:55.892896       1 server_linux.go:69] "Using iptables proxy"
	I0807 19:37:55.906357       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.224.86"]
	I0807 19:37:55.960523       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0807 19:37:55.960664       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0807 19:37:55.960687       1 server_linux.go:165] "Using iptables Proxier"
	I0807 19:37:55.964705       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0807 19:37:55.965221       1 server.go:872] "Version info" version="v1.30.3"
	I0807 19:37:55.965238       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 19:37:55.966667       1 config.go:192] "Starting service config controller"
	I0807 19:37:55.966715       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0807 19:37:55.966748       1 config.go:101] "Starting endpoint slice config controller"
	I0807 19:37:55.966754       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0807 19:37:55.970324       1 config.go:319] "Starting node config controller"
	I0807 19:37:55.971420       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0807 19:37:56.067062       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0807 19:37:56.067134       1 shared_informer.go:320] Caches are synced for service config
	I0807 19:37:56.072467       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1415d4256b4a] <==
	W0807 19:37:37.140435       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0807 19:37:37.140497       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0807 19:37:37.214572       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0807 19:37:37.214631       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0807 19:37:37.217151       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0807 19:37:37.217434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0807 19:37:37.275895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0807 19:37:37.276164       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0807 19:37:37.355238       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0807 19:37:37.355363       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0807 19:37:37.371774       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0807 19:37:37.372551       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0807 19:37:37.382311       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0807 19:37:37.382673       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0807 19:37:37.471613       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0807 19:37:37.471897       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0807 19:37:37.535975       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0807 19:37:37.536122       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0807 19:37:37.562575       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0807 19:37:37.563626       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0807 19:37:37.617226       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0807 19:37:37.617453       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0807 19:37:37.669556       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0807 19:37:37.670249       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0807 19:37:40.152655       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 07 19:38:39 multinode-116700 kubelet[2281]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 19:38:39 multinode-116700 kubelet[2281]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 19:38:39 multinode-116700 kubelet[2281]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 19:39:39 multinode-116700 kubelet[2281]: E0807 19:39:39.700986    2281 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 19:39:39 multinode-116700 kubelet[2281]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 19:39:39 multinode-116700 kubelet[2281]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 19:39:39 multinode-116700 kubelet[2281]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 19:39:39 multinode-116700 kubelet[2281]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 19:40:39 multinode-116700 kubelet[2281]: E0807 19:40:39.702613    2281 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 19:40:39 multinode-116700 kubelet[2281]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 19:40:39 multinode-116700 kubelet[2281]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 19:40:39 multinode-116700 kubelet[2281]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 19:40:39 multinode-116700 kubelet[2281]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 19:41:39 multinode-116700 kubelet[2281]: E0807 19:41:39.703497    2281 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 19:41:39 multinode-116700 kubelet[2281]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 19:41:39 multinode-116700 kubelet[2281]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 19:41:39 multinode-116700 kubelet[2281]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 19:41:39 multinode-116700 kubelet[2281]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 19:42:08 multinode-116700 kubelet[2281]: I0807 19:42:08.237523    2281 topology_manager.go:215] "Topology Admit Handler" podUID="e89136fe-dd58-4e76-b6e8-4a71c0f51bbb" podNamespace="default" podName="busybox-fc5497c4f-s4njd"
	Aug 07 19:42:08 multinode-116700 kubelet[2281]: I0807 19:42:08.380320    2281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4td6d\" (UniqueName: \"kubernetes.io/projected/e89136fe-dd58-4e76-b6e8-4a71c0f51bbb-kube-api-access-4td6d\") pod \"busybox-fc5497c4f-s4njd\" (UID: \"e89136fe-dd58-4e76-b6e8-4a71c0f51bbb\") " pod="default/busybox-fc5497c4f-s4njd"
	Aug 07 19:42:39 multinode-116700 kubelet[2281]: E0807 19:42:39.709703    2281 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 19:42:39 multinode-116700 kubelet[2281]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 19:42:39 multinode-116700 kubelet[2281]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 19:42:39 multinode-116700 kubelet[2281]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 19:42:39 multinode-116700 kubelet[2281]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0807 19:42:52.098686   13076 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-116700 -n multinode-116700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-116700 -n multinode-116700: (12.6808987s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-116700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (58.72s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (396.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-116700
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-116700
E0807 19:58:38.179779    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-116700: (1m39.7998473s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-116700 --wait=true -v=8 --alsologtostderr
E0807 20:01:23.785319    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 20:03:20.558660    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 20:03:38.178570    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-116700 --wait=true -v=8 --alsologtostderr: exit status 1 (4m18.8625056s)

                                                
                                                
-- stdout --
	* [multinode-116700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19389
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-116700" primary control-plane node in "multinode-116700" cluster
	* Restarting existing hyperv VM for "multinode-116700" ...
	* Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-116700-m02" worker node in "multinode-116700" cluster
	* Restarting existing hyperv VM for "multinode-116700-m02" ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0807 20:00:10.023005    1172 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0807 20:00:10.103540    1172 out.go:291] Setting OutFile to fd 1724 ...
	I0807 20:00:10.104539    1172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 20:00:10.104539    1172 out.go:304] Setting ErrFile to fd 1728...
	I0807 20:00:10.104539    1172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 20:00:10.127592    1172 out.go:298] Setting JSON to false
	I0807 20:00:10.131531    1172 start.go:129] hostinfo: {"hostname":"minikube6","uptime":322739,"bootTime":1722738070,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4717 Build 19045.4717","kernelVersion":"10.0.19045.4717 Build 19045.4717","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0807 20:00:10.131531    1172 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 20:00:10.177966    1172 out.go:177] * [multinode-116700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	I0807 20:00:10.279375    1172 notify.go:220] Checking for updates...
	I0807 20:00:10.299518    1172 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 20:00:10.328135    1172 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 20:00:10.339615    1172 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0807 20:00:10.354482    1172 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 20:00:10.382547    1172 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 20:00:10.392146    1172 config.go:182] Loaded profile config "multinode-116700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 20:00:10.392699    1172 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 20:00:16.037086    1172 out.go:177] * Using the hyperv driver based on existing profile
	I0807 20:00:16.050435    1172 start.go:297] selected driver: hyperv
	I0807 20:00:16.050435    1172 start.go:901] validating driver "hyperv" against &{Name:multinode-116700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-116700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.224.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.226.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.226.146 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 20:00:16.051557    1172 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 20:00:16.109811    1172 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 20:00:16.109811    1172 cni.go:84] Creating CNI manager for ""
	I0807 20:00:16.109811    1172 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0807 20:00:16.109811    1172 start.go:340] cluster config:
	{Name:multinode-116700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-116700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.224.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.226.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.226.146 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
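	[Editor's note] The "multinode detected (3 nodes found), recommending kindnet" line above reflects minikube's CNI auto-selection: when no CNI is requested and the cluster has more than one node, kindnet is recommended. A simplified sketch of that decision (hypothetical helper name and default; the real logic lives in minikube's cni package):

```python
def choose_cni(requested: str, node_count: int) -> str:
    """Pick a CNI plugin the way the log suggests: honor an explicit
    request, recommend kindnet for multinode clusters, and fall back
    to a single-node default otherwise (default name is assumed)."""
    if requested:
        return requested
    if node_count > 1:
        # matches the log: "multinode detected (N nodes found), recommending kindnet"
        return "kindnet"
    return "bridge"
```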
	I0807 20:00:16.109811    1172 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 20:00:16.140527    1172 out.go:177] * Starting "multinode-116700" primary control-plane node in "multinode-116700" cluster
	I0807 20:00:16.144337    1172 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 20:00:16.144451    1172 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0807 20:00:16.144451    1172 cache.go:56] Caching tarball of preloaded images
	I0807 20:00:16.144733    1172 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0807 20:00:16.144733    1172 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 20:00:16.145402    1172 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\config.json ...
	I0807 20:00:16.148192    1172 start.go:360] acquireMachinesLock for multinode-116700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 20:00:16.148277    1172 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-116700"
	I0807 20:00:16.148277    1172 start.go:96] Skipping create...Using existing machine configuration
	I0807 20:00:16.148277    1172 fix.go:54] fixHost starting: 
	I0807 20:00:16.148848    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:00:19.012991    1172 main.go:141] libmachine: [stdout =====>] : Off
	
	I0807 20:00:19.012991    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:19.012991    1172 fix.go:112] recreateIfNeeded on multinode-116700: state=Stopped err=<nil>
	W0807 20:00:19.012991    1172 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 20:00:19.016044    1172 out.go:177] * Restarting existing hyperv VM for "multinode-116700" ...
	I0807 20:00:19.020008    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-116700
	I0807 20:00:22.162390    1172 main.go:141] libmachine: [stdout =====>] : 
	I0807 20:00:22.163445    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:22.163445    1172 main.go:141] libmachine: Waiting for host to start...
	I0807 20:00:22.163445    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:00:24.521154    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:00:24.521154    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:24.521154    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:00:27.104032    1172 main.go:141] libmachine: [stdout =====>] : 
	I0807 20:00:27.104082    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:28.118286    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:00:30.414258    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:00:30.414258    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:30.414861    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:00:33.039773    1172 main.go:141] libmachine: [stdout =====>] : 
	I0807 20:00:33.039773    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:34.054566    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:00:36.376044    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:00:36.376044    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:36.376044    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:00:39.072711    1172 main.go:141] libmachine: [stdout =====>] : 
	I0807 20:00:39.072945    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:40.075457    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:00:42.436819    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:00:42.436819    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:42.436819    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:00:45.086832    1172 main.go:141] libmachine: [stdout =====>] : 
	I0807 20:00:45.086832    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:46.100853    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:00:48.407380    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:00:48.407380    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:48.407497    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:00:51.060536    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:00:51.060536    1172 main.go:141] libmachine: [stderr =====>] : 
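	[Editor's note] The "Waiting for host to start..." sequence above alternates Get-VM state checks with IP-address queries, sleeping roughly a second between empty results until 172.28.226.95 appears. A simplified, hypothetical reconstruction of that poll loop, with the two PowerShell queries injected as callables:

```python
import time

def wait_for_ip(get_state, get_ip, timeout=120.0, interval=1.0, sleep=time.sleep):
    """Poll until the VM reports Running and its first NIC has an IP.
    get_state/get_ip stand in for the Hyper-V\\Get-VM queries in the log."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_state() == "Running":
            ip = get_ip()
            if ip:  # Get-VM returns an empty string until DHCP completes
                return ip
        sleep(interval)
    raise TimeoutError("host did not report an IP address in time")
```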
	I0807 20:00:51.064184    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:00:53.361508    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:00:53.361508    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:53.361850    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:00:56.108200    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:00:56.108200    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:56.109427    1172 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\config.json ...
	I0807 20:00:56.113460    1172 machine.go:94] provisionDockerMachine start ...
	I0807 20:00:56.113591    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:00:58.409696    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:00:58.409696    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:58.410589    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:01.042695    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:01.042695    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:01.048860    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:01:01.049544    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.95 22 <nil> <nil>}
	I0807 20:01:01.049544    1172 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 20:01:01.183207    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0807 20:01:01.183207    1172 buildroot.go:166] provisioning hostname "multinode-116700"
	I0807 20:01:01.183207    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:01:03.374260    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:01:03.374260    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:03.374260    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:06.002746    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:06.003046    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:06.008544    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:01:06.008732    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.95 22 <nil> <nil>}
	I0807 20:01:06.008732    1172 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-116700 && echo "multinode-116700" | sudo tee /etc/hostname
	I0807 20:01:06.164405    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-116700
	
	I0807 20:01:06.164405    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:01:08.426773    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:01:08.426773    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:08.427327    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:11.067823    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:11.068100    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:11.074027    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:01:11.074027    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.95 22 <nil> <nil>}
	I0807 20:01:11.074550    1172 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-116700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-116700/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-116700' | sudo tee -a /etc/hosts; 
				fi
			fi
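	[Editor's note] The shell snippet above is an idempotent /etc/hosts update: do nothing if a line for the hostname already exists, otherwise rewrite an existing 127.0.1.1 entry or append a new one. The same logic as a Python sketch over a list of lines (an approximation of the grep/sed patterns, not minikube's actual code):

```python
import re

def ensure_host_entry(lines, hostname):
    """Mirror the shell logic: skip if `hostname` is already listed,
    else replace an existing 127.0.1.1 line or append a new entry."""
    if any(re.search(r"\s" + re.escape(hostname) + r"$", line) for line in lines):
        return list(lines)  # already present; nothing to do
    entry = f"127.0.1.1 {hostname}"
    for i, line in enumerate(lines):
        if re.match(r"^127\.0\.1\.1\s", line):
            out = list(lines)
            out[i] = entry  # the sed branch: rewrite the 127.0.1.1 line
            return out
    return list(lines) + [entry]  # the tee -a branch: append
```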
	I0807 20:01:11.233499    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 20:01:11.233499    1172 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0807 20:01:11.233640    1172 buildroot.go:174] setting up certificates
	I0807 20:01:11.233640    1172 provision.go:84] configureAuth start
	I0807 20:01:11.233676    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:01:13.441181    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:01:13.441409    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:13.441409    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:16.040959    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:16.040959    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:16.040959    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:01:18.311508    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:01:18.311508    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:18.311987    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:20.973941    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:20.973941    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:20.973941    1172 provision.go:143] copyHostCerts
	I0807 20:01:20.974270    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0807 20:01:20.974693    1172 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0807 20:01:20.974693    1172 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0807 20:01:20.975393    1172 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0807 20:01:20.976694    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0807 20:01:20.976694    1172 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0807 20:01:20.977307    1172 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0807 20:01:20.977307    1172 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0807 20:01:20.978871    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0807 20:01:20.979404    1172 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0807 20:01:20.979404    1172 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0807 20:01:20.979614    1172 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
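	[Editor's note] Each cert in the copyHostCerts block above follows a found/remove/copy pattern: a stale destination is deleted before the fresh copy is written, and the byte count is logged. A minimal sketch of that pattern (hypothetical helper; the real implementation is minikube's exec_runner):

```python
import os
import shutil

def refresh_copy(src, dst):
    """Copy src to dst, first removing a stale dst if one exists,
    matching the exec_runner 'found ..., removing ...' then 'cp: ...' lines.
    Returns the byte count, which the log records in parentheses."""
    if os.path.exists(dst):
        os.remove(dst)
    shutil.copyfile(src, dst)
    return os.path.getsize(dst)
```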
	I0807 20:01:20.981267    1172 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-116700 san=[127.0.0.1 172.28.226.95 localhost minikube multinode-116700]
	I0807 20:01:21.124252    1172 provision.go:177] copyRemoteCerts
	I0807 20:01:21.135716    1172 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 20:01:21.135716    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:01:23.382416    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:01:23.382416    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:23.382416    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:26.048404    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:26.048404    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:26.049688    1172 sshutil.go:53] new ssh client: &{IP:172.28.226.95 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700\id_rsa Username:docker}
	I0807 20:01:26.165866    1172 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0300853s)
	I0807 20:01:26.165866    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0807 20:01:26.166571    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 20:01:26.222997    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0807 20:01:26.223813    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0807 20:01:26.268398    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0807 20:01:26.269380    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0807 20:01:26.321927    1172 provision.go:87] duration metric: took 15.0880571s to configureAuth
	I0807 20:01:26.321927    1172 buildroot.go:189] setting minikube options for container-runtime
	I0807 20:01:26.322777    1172 config.go:182] Loaded profile config "multinode-116700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 20:01:26.322777    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:01:28.636835    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:01:28.636988    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:28.637043    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:31.395949    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:31.395949    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:31.402593    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:01:31.403397    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.95 22 <nil> <nil>}
	I0807 20:01:31.403397    1172 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0807 20:01:31.532267    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0807 20:01:31.532406    1172 buildroot.go:70] root file system type: tmpfs
	I0807 20:01:31.532689    1172 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0807 20:01:31.532780    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:01:33.827761    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:01:33.828069    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:33.828159    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:36.590763    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:36.590763    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:36.596978    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:01:36.597756    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.95 22 <nil> <nil>}
	I0807 20:01:36.597756    1172 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0807 20:01:36.750943    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0807 20:01:36.751059    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:01:39.108522    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:01:39.108522    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:39.109412    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:41.817739    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:41.817739    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:41.823991    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:01:41.824680    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.95 22 <nil> <nil>}
	I0807 20:01:41.824680    1172 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0807 20:01:44.481944    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0807 20:01:44.481944    1172 machine.go:97] duration metric: took 48.3678652s to provisionDockerMachine
	I0807 20:01:44.481944    1172 start.go:293] postStartSetup for "multinode-116700" (driver="hyperv")
	I0807 20:01:44.481944    1172 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 20:01:44.495249    1172 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 20:01:44.495249    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:01:46.673420    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:01:46.673420    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:46.673420    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:49.339449    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:49.340512    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:49.341342    1172 sshutil.go:53] new ssh client: &{IP:172.28.226.95 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700\id_rsa Username:docker}
	I0807 20:01:49.442382    1172 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9468896s)
	I0807 20:01:49.455670    1172 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 20:01:49.462046    1172 command_runner.go:130] > NAME=Buildroot
	I0807 20:01:49.462046    1172 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0807 20:01:49.462046    1172 command_runner.go:130] > ID=buildroot
	I0807 20:01:49.462046    1172 command_runner.go:130] > VERSION_ID=2023.02.9
	I0807 20:01:49.462046    1172 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0807 20:01:49.462257    1172 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 20:01:49.462363    1172 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0807 20:01:49.462857    1172 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0807 20:01:49.463770    1172 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> 96602.pem in /etc/ssl/certs
	I0807 20:01:49.463839    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> /etc/ssl/certs/96602.pem
	I0807 20:01:49.475789    1172 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 20:01:49.492110    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /etc/ssl/certs/96602.pem (1708 bytes)
	I0807 20:01:49.539608    1172 start.go:296] duration metric: took 5.0575985s for postStartSetup
	I0807 20:01:49.539661    1172 fix.go:56] duration metric: took 1m33.3901884s for fixHost
	I0807 20:01:49.539854    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:01:51.735819    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:01:51.735819    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:51.736786    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:54.361813    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:54.361813    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:54.367767    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:01:54.368359    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.95 22 <nil> <nil>}
	I0807 20:01:54.368497    1172 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0807 20:01:54.489595    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723060914.509642655
	
	I0807 20:01:54.489595    1172 fix.go:216] guest clock: 1723060914.509642655
	I0807 20:01:54.489595    1172 fix.go:229] Guest: 2024-08-07 20:01:54.509642655 +0000 UTC Remote: 2024-08-07 20:01:49.5397594 +0000 UTC m=+99.596668501 (delta=4.969883255s)
	I0807 20:01:54.489795    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:01:56.673033    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:01:56.673033    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:56.673405    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:59.361130    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:59.361850    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:59.367136    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:01:59.367677    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.95 22 <nil> <nil>}
	I0807 20:01:59.367677    1172 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1723060914
	I0807 20:01:59.509330    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Aug  7 20:01:54 UTC 2024
	
	I0807 20:01:59.509330    1172 fix.go:236] clock set: Wed Aug  7 20:01:54 UTC 2024
	 (err=<nil>)
	I0807 20:01:59.509330    1172 start.go:83] releasing machines lock for "multinode-116700", held for 1m43.3597303s
	I0807 20:01:59.509951    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:02:01.692427    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:02:01.692553    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:02:01.692553    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:02:04.315212    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:02:04.315212    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:02:04.319274    1172 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0807 20:02:04.319274    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:02:04.329957    1172 ssh_runner.go:195] Run: cat /version.json
	I0807 20:02:04.330764    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:02:06.604118    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:02:06.604118    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:02:06.604118    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:02:06.620664    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:02:06.620664    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:02:06.621606    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:02:09.382467    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:02:09.383226    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:02:09.383904    1172 sshutil.go:53] new ssh client: &{IP:172.28.226.95 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700\id_rsa Username:docker}
	I0807 20:02:09.404504    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:02:09.404504    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:02:09.405082    1172 sshutil.go:53] new ssh client: &{IP:172.28.226.95 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700\id_rsa Username:docker}
	I0807 20:02:09.478536    1172 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0807 20:02:09.478657    1172 ssh_runner.go:235] Completed: cat /version.json: (5.1479826s)
	I0807 20:02:09.491319    1172 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0807 20:02:09.492517    1172 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1731771s)
	W0807 20:02:09.492517    1172 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0807 20:02:09.495298    1172 ssh_runner.go:195] Run: systemctl --version
	I0807 20:02:09.506461    1172 command_runner.go:130] > systemd 252 (252)
	I0807 20:02:09.506461    1172 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0807 20:02:09.520033    1172 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0807 20:02:09.533683    1172 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0807 20:02:09.533811    1172 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 20:02:09.546640    1172 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 20:02:09.577504    1172 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0807 20:02:09.577944    1172 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0807 20:02:09.577944    1172 start.go:495] detecting cgroup driver to use...
	I0807 20:02:09.578364    1172 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 20:02:09.614371    1172 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0807 20:02:09.627908    1172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0807 20:02:09.659545    1172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0807 20:02:09.680172    1172 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0807 20:02:09.695965    1172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0807 20:02:09.728498    1172 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 20:02:09.760768    1172 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0807 20:02:09.791840    1172 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 20:02:09.821453    1172 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 20:02:09.853626    1172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0807 20:02:09.883121    1172 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0807 20:02:09.915213    1172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0807 20:02:09.946534    1172 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 20:02:09.964465    1172 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0807 20:02:09.976623    1172 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 20:02:10.006278    1172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 20:02:10.232604    1172 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0807 20:02:10.266321    1172 start.go:495] detecting cgroup driver to use...
	I0807 20:02:10.283268    1172 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0807 20:02:10.309588    1172 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0807 20:02:10.309588    1172 command_runner.go:130] > [Unit]
	I0807 20:02:10.309588    1172 command_runner.go:130] > Description=Docker Application Container Engine
	I0807 20:02:10.309588    1172 command_runner.go:130] > Documentation=https://docs.docker.com
	I0807 20:02:10.309588    1172 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0807 20:02:10.309588    1172 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0807 20:02:10.309588    1172 command_runner.go:130] > StartLimitBurst=3
	I0807 20:02:10.309588    1172 command_runner.go:130] > StartLimitIntervalSec=60
	I0807 20:02:10.309588    1172 command_runner.go:130] > [Service]
	I0807 20:02:10.309588    1172 command_runner.go:130] > Type=notify
	I0807 20:02:10.309588    1172 command_runner.go:130] > Restart=on-failure
	I0807 20:02:10.309588    1172 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0807 20:02:10.309828    1172 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0807 20:02:10.309828    1172 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0807 20:02:10.309828    1172 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0807 20:02:10.309828    1172 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0807 20:02:10.309828    1172 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0807 20:02:10.309828    1172 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0807 20:02:10.309963    1172 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0807 20:02:10.309963    1172 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0807 20:02:10.309963    1172 command_runner.go:130] > ExecStart=
	I0807 20:02:10.309963    1172 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0807 20:02:10.309963    1172 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0807 20:02:10.309963    1172 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0807 20:02:10.310089    1172 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0807 20:02:10.310089    1172 command_runner.go:130] > LimitNOFILE=infinity
	I0807 20:02:10.310089    1172 command_runner.go:130] > LimitNPROC=infinity
	I0807 20:02:10.310089    1172 command_runner.go:130] > LimitCORE=infinity
	I0807 20:02:10.310089    1172 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0807 20:02:10.310089    1172 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0807 20:02:10.310089    1172 command_runner.go:130] > TasksMax=infinity
	I0807 20:02:10.310089    1172 command_runner.go:130] > TimeoutStartSec=0
	I0807 20:02:10.310089    1172 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0807 20:02:10.310089    1172 command_runner.go:130] > Delegate=yes
	I0807 20:02:10.310203    1172 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0807 20:02:10.310226    1172 command_runner.go:130] > KillMode=process
	I0807 20:02:10.310226    1172 command_runner.go:130] > [Install]
	I0807 20:02:10.310226    1172 command_runner.go:130] > WantedBy=multi-user.target
	I0807 20:02:10.322608    1172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 20:02:10.358912    1172 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 20:02:10.405593    1172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 20:02:10.441549    1172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 20:02:10.473060    1172 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	W0807 20:02:10.543555    1172 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0807 20:02:10.543555    1172 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0807 20:02:10.546508    1172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 20:02:10.572162    1172 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 20:02:10.609804    1172 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0807 20:02:10.622535    1172 ssh_runner.go:195] Run: which cri-dockerd
	I0807 20:02:10.628457    1172 command_runner.go:130] > /usr/bin/cri-dockerd
	I0807 20:02:10.639874    1172 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0807 20:02:10.657182    1172 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0807 20:02:10.705090    1172 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0807 20:02:10.906846    1172 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0807 20:02:11.095746    1172 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0807 20:02:11.096131    1172 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0807 20:02:11.144438    1172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 20:02:11.346499    1172 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 20:02:14.064580    1172 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.7179767s)
	I0807 20:02:14.077726    1172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0807 20:02:14.116085    1172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 20:02:14.151561    1172 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0807 20:02:14.371765    1172 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0807 20:02:14.578435    1172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 20:02:14.778375    1172 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0807 20:02:14.828395    1172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 20:02:14.871851    1172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 20:02:15.091292    1172 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0807 20:02:15.194467    1172 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0807 20:02:15.207739    1172 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0807 20:02:15.215931    1172 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0807 20:02:15.216054    1172 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0807 20:02:15.216054    1172 command_runner.go:130] > Device: 0,22	Inode: 850         Links: 1
	I0807 20:02:15.216054    1172 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0807 20:02:15.216112    1172 command_runner.go:130] > Access: 2024-08-07 20:02:15.135634566 +0000
	I0807 20:02:15.216144    1172 command_runner.go:130] > Modify: 2024-08-07 20:02:15.135634566 +0000
	I0807 20:02:15.216144    1172 command_runner.go:130] > Change: 2024-08-07 20:02:15.140634576 +0000
	I0807 20:02:15.216144    1172 command_runner.go:130] >  Birth: -
	I0807 20:02:15.216769    1172 start.go:563] Will wait 60s for crictl version
	I0807 20:02:15.228888    1172 ssh_runner.go:195] Run: which crictl
	I0807 20:02:15.233902    1172 command_runner.go:130] > /usr/bin/crictl
	I0807 20:02:15.245796    1172 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 20:02:15.299372    1172 command_runner.go:130] > Version:  0.1.0
	I0807 20:02:15.299372    1172 command_runner.go:130] > RuntimeName:  docker
	I0807 20:02:15.299372    1172 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0807 20:02:15.299372    1172 command_runner.go:130] > RuntimeApiVersion:  v1
	I0807 20:02:15.299372    1172 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0807 20:02:15.309158    1172 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 20:02:15.341827    1172 command_runner.go:130] > 27.1.1
	I0807 20:02:15.351138    1172 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 20:02:15.381062    1172 command_runner.go:130] > 27.1.1
	I0807 20:02:15.386326    1172 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0807 20:02:15.387041    1172 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0807 20:02:15.391449    1172 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0807 20:02:15.391449    1172 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0807 20:02:15.391449    1172 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0807 20:02:15.391449    1172 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:f6:3a:6a Flags:up|broadcast|multicast|running}
	I0807 20:02:15.393439    1172 ip.go:210] interface addr: fe80::e7eb:b592:d388:ff99/64
	I0807 20:02:15.394439    1172 ip.go:210] interface addr: 172.28.224.1/20
	I0807 20:02:15.404453    1172 ssh_runner.go:195] Run: grep 172.28.224.1	host.minikube.internal$ /etc/hosts
	I0807 20:02:15.412163    1172 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 20:02:15.434123    1172 kubeadm.go:883] updating cluster {Name:multinode-116700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-116700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.226.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.226.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.226.146 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0807 20:02:15.434680    1172 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 20:02:15.444525    1172 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0807 20:02:15.470479    1172 command_runner.go:130] > kindest/kindnetd:v20240730-75a5af0c
	I0807 20:02:15.470479    1172 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0807 20:02:15.470479    1172 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0807 20:02:15.470479    1172 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0807 20:02:15.470479    1172 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0807 20:02:15.470479    1172 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0807 20:02:15.470479    1172 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0807 20:02:15.470479    1172 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0807 20:02:15.470479    1172 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 20:02:15.470479    1172 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0807 20:02:15.470479    1172 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240730-75a5af0c
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0807 20:02:15.470479    1172 docker.go:615] Images already preloaded, skipping extraction
	I0807 20:02:15.480873    1172 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0807 20:02:15.505892    1172 command_runner.go:130] > kindest/kindnetd:v20240730-75a5af0c
	I0807 20:02:15.505892    1172 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0807 20:02:15.505892    1172 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0807 20:02:15.506917    1172 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0807 20:02:15.506917    1172 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0807 20:02:15.506917    1172 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0807 20:02:15.506917    1172 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0807 20:02:15.506917    1172 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0807 20:02:15.506917    1172 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 20:02:15.506917    1172 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0807 20:02:15.506917    1172 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240730-75a5af0c
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0807 20:02:15.506917    1172 cache_images.go:84] Images are preloaded, skipping loading
	I0807 20:02:15.506917    1172 kubeadm.go:934] updating node { 172.28.226.95 8443 v1.30.3 docker true true} ...
	I0807 20:02:15.506917    1172 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-116700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.226.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-116700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 20:02:15.514888    1172 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0807 20:02:15.587485    1172 command_runner.go:130] > cgroupfs
	I0807 20:02:15.587949    1172 cni.go:84] Creating CNI manager for ""
	I0807 20:02:15.587949    1172 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0807 20:02:15.587949    1172 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 20:02:15.588016    1172 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.226.95 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-116700 NodeName:multinode-116700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.226.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.226.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 20:02:15.588081    1172 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.226.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-116700"
	  kubeletExtraArgs:
	    node-ip: 172.28.226.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.226.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0807 20:02:15.599195    1172 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 20:02:15.618182    1172 command_runner.go:130] > kubeadm
	I0807 20:02:15.618182    1172 command_runner.go:130] > kubectl
	I0807 20:02:15.618182    1172 command_runner.go:130] > kubelet
	I0807 20:02:15.619191    1172 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 20:02:15.629194    1172 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0807 20:02:15.647235    1172 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0807 20:02:15.678584    1172 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 20:02:15.708429    1172 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0807 20:02:15.754528    1172 ssh_runner.go:195] Run: grep 172.28.226.95	control-plane.minikube.internal$ /etc/hosts
	I0807 20:02:15.760235    1172 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.226.95	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 20:02:15.790352    1172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 20:02:15.989188    1172 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 20:02:16.018324    1172 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700 for IP: 172.28.226.95
	I0807 20:02:16.018324    1172 certs.go:194] generating shared ca certs ...
	I0807 20:02:16.018324    1172 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 20:02:16.019132    1172 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0807 20:02:16.019568    1172 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0807 20:02:16.019568    1172 certs.go:256] generating profile certs ...
	I0807 20:02:16.020293    1172 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\client.key
	I0807 20:02:16.020507    1172 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.key.df661a70
	I0807 20:02:16.020507    1172 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.crt.df661a70 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.226.95]
	I0807 20:02:16.264211    1172 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.crt.df661a70 ...
	I0807 20:02:16.264211    1172 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.crt.df661a70: {Name:mka21d5154a09762fea20bdb9ae90f9f716422d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 20:02:16.264756    1172 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.key.df661a70 ...
	I0807 20:02:16.265772    1172 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.key.df661a70: {Name:mk0a2c275254f84e3f2c77c6561fdb3c054cf975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 20:02:16.266082    1172 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.crt.df661a70 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.crt
	I0807 20:02:16.279860    1172 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.key.df661a70 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.key
	I0807 20:02:16.281809    1172 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\proxy-client.key
	I0807 20:02:16.281809    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0807 20:02:16.282023    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0807 20:02:16.282284    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0807 20:02:16.282492    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0807 20:02:16.282819    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0807 20:02:16.283046    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0807 20:02:16.283143    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0807 20:02:16.283276    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0807 20:02:16.283921    1172 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem (1338 bytes)
	W0807 20:02:16.283921    1172 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660_empty.pem, impossibly tiny 0 bytes
	I0807 20:02:16.283921    1172 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0807 20:02:16.284613    1172 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0807 20:02:16.284819    1172 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0807 20:02:16.284819    1172 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0807 20:02:16.285700    1172 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem (1708 bytes)
	I0807 20:02:16.285945    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0807 20:02:16.286109    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem -> /usr/share/ca-certificates/9660.pem
	I0807 20:02:16.286323    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> /usr/share/ca-certificates/96602.pem
	I0807 20:02:16.287558    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 20:02:16.342059    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 20:02:16.389999    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 20:02:16.440918    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0807 20:02:16.489939    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0807 20:02:16.537204    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0807 20:02:16.583348    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 20:02:16.629678    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 20:02:16.675675    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 20:02:16.722020    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem --> /usr/share/ca-certificates/9660.pem (1338 bytes)
	I0807 20:02:16.766024    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /usr/share/ca-certificates/96602.pem (1708 bytes)
	I0807 20:02:16.811014    1172 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 20:02:16.860479    1172 ssh_runner.go:195] Run: openssl version
	I0807 20:02:16.869388    1172 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0807 20:02:16.882193    1172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 20:02:16.911911    1172 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 20:02:16.919343    1172 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  7 17:33 /usr/share/ca-certificates/minikubeCA.pem
	I0807 20:02:16.919437    1172 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:33 /usr/share/ca-certificates/minikubeCA.pem
	I0807 20:02:16.931265    1172 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 20:02:16.940164    1172 command_runner.go:130] > b5213941
	I0807 20:02:16.951969    1172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 20:02:16.984942    1172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9660.pem && ln -fs /usr/share/ca-certificates/9660.pem /etc/ssl/certs/9660.pem"
	I0807 20:02:17.018400    1172 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9660.pem
	I0807 20:02:17.026330    1172 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  7 17:50 /usr/share/ca-certificates/9660.pem
	I0807 20:02:17.026330    1172 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 17:50 /usr/share/ca-certificates/9660.pem
	I0807 20:02:17.038831    1172 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9660.pem
	I0807 20:02:17.047660    1172 command_runner.go:130] > 51391683
	I0807 20:02:17.062636    1172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9660.pem /etc/ssl/certs/51391683.0"
	I0807 20:02:17.094881    1172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96602.pem && ln -fs /usr/share/ca-certificates/96602.pem /etc/ssl/certs/96602.pem"
	I0807 20:02:17.125951    1172 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96602.pem
	I0807 20:02:17.133000    1172 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  7 17:50 /usr/share/ca-certificates/96602.pem
	I0807 20:02:17.133000    1172 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 17:50 /usr/share/ca-certificates/96602.pem
	I0807 20:02:17.146073    1172 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96602.pem
	I0807 20:02:17.156012    1172 command_runner.go:130] > 3ec20f2e
	I0807 20:02:17.168183    1172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96602.pem /etc/ssl/certs/3ec20f2e.0"
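	The three `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's subject-hash lookup convention: a `-CApath` directory is searched via `<subject_hash>.0` symlinks, so each CA certificate is hashed and linked under that name. A minimal sketch of the same step, assuming an existing CA file; the `ca.pem` and `certs/` names are placeholders, not paths from this log:

	```shell
	# Compute the subject hash OpenSSL uses for -CApath lookups,
	# then install the certificate under <hash>.0 so verification can find it.
	mkdir -p certs
	hash=$(openssl x509 -hash -noout -in ca.pem)
	ln -fs "$(pwd)/ca.pem" "certs/${hash}.0"
	# A self-signed CA present in the -CApath directory verifies against itself.
	openssl verify -CApath certs ca.pem
	```

	This is why minikube runs `test -L /etc/ssl/certs/<hash>.0 || ln -fs ...`: the symlink only needs to be (re)created when it is missing, since the hash is stable for a given subject.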
	I0807 20:02:17.197874    1172 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 20:02:17.204301    1172 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 20:02:17.204445    1172 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0807 20:02:17.204531    1172 command_runner.go:130] > Device: 8,1	Inode: 2102098     Links: 1
	I0807 20:02:17.204531    1172 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0807 20:02:17.204531    1172 command_runner.go:130] > Access: 2024-08-07 19:37:26.697218980 +0000
	I0807 20:02:17.204615    1172 command_runner.go:130] > Modify: 2024-08-07 19:37:26.697218980 +0000
	I0807 20:02:17.204615    1172 command_runner.go:130] > Change: 2024-08-07 19:37:26.697218980 +0000
	I0807 20:02:17.204615    1172 command_runner.go:130] >  Birth: 2024-08-07 19:37:26.697218980 +0000
	I0807 20:02:17.215873    1172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0807 20:02:17.224955    1172 command_runner.go:130] > Certificate will not expire
	I0807 20:02:17.237117    1172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0807 20:02:17.247884    1172 command_runner.go:130] > Certificate will not expire
	I0807 20:02:17.263407    1172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0807 20:02:17.274406    1172 command_runner.go:130] > Certificate will not expire
	I0807 20:02:17.286816    1172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0807 20:02:17.297709    1172 command_runner.go:130] > Certificate will not expire
	I0807 20:02:17.312002    1172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0807 20:02:17.323195    1172 command_runner.go:130] > Certificate will not expire
	I0807 20:02:17.335664    1172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0807 20:02:17.345329    1172 command_runner.go:130] > Certificate will not expire
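	Each `-checkend 86400` run above asks whether the certificate expires within the next 86400 seconds (24 hours): exit status 0 means it does not, and OpenSSL prints "Certificate will not expire". A sketch of the same check, where `cert.pem` is a placeholder path:

	```shell
	# openssl exits 0 when the cert is still valid 24h from now, 1 otherwise.
	if openssl x509 -noout -in cert.pem -checkend 86400; then
	  echo "cert is good for at least another day"
	else
	  echo "cert expires within 24h; rotate it before restarting the cluster"
	fi
	```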
	I0807 20:02:17.345753    1172 kubeadm.go:392] StartCluster: {Name:multinode-116700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-116700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.226.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.226.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.226.146 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 20:02:17.355518    1172 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0807 20:02:17.393969    1172 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0807 20:02:17.413939    1172 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0807 20:02:17.413939    1172 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0807 20:02:17.413939    1172 command_runner.go:130] > /var/lib/minikube/etcd:
	I0807 20:02:17.413939    1172 command_runner.go:130] > member
	I0807 20:02:17.413939    1172 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0807 20:02:17.413939    1172 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0807 20:02:17.425718    1172 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0807 20:02:17.445077    1172 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0807 20:02:17.446305    1172 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-116700" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 20:02:17.446901    1172 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-116700" cluster setting kubeconfig missing "multinode-116700" context setting]
	I0807 20:02:17.447841    1172 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 20:02:17.463405    1172 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 20:02:17.464071    1172 kapi.go:59] client config for multinode-116700: &rest.Config{Host:"https://172.28.226.95:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-116700/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-116700/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1da64c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0807 20:02:17.465785    1172 cert_rotation.go:137] Starting client certificate rotation controller
	I0807 20:02:17.477135    1172 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0807 20:02:17.495596    1172 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0807 20:02:17.495693    1172 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0807 20:02:17.495693    1172 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0807 20:02:17.495693    1172 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0807 20:02:17.495693    1172 command_runner.go:130] >  kind: InitConfiguration
	I0807 20:02:17.495693    1172 command_runner.go:130] >  localAPIEndpoint:
	I0807 20:02:17.495693    1172 command_runner.go:130] > -  advertiseAddress: 172.28.224.86
	I0807 20:02:17.495693    1172 command_runner.go:130] > +  advertiseAddress: 172.28.226.95
	I0807 20:02:17.495693    1172 command_runner.go:130] >    bindPort: 8443
	I0807 20:02:17.495693    1172 command_runner.go:130] >  bootstrapTokens:
	I0807 20:02:17.495693    1172 command_runner.go:130] >    - groups:
	I0807 20:02:17.495693    1172 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0807 20:02:17.495693    1172 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0807 20:02:17.495693    1172 command_runner.go:130] >    name: "multinode-116700"
	I0807 20:02:17.495693    1172 command_runner.go:130] >    kubeletExtraArgs:
	I0807 20:02:17.495693    1172 command_runner.go:130] > -    node-ip: 172.28.224.86
	I0807 20:02:17.495693    1172 command_runner.go:130] > +    node-ip: 172.28.226.95
	I0807 20:02:17.495693    1172 command_runner.go:130] >    taints: []
	I0807 20:02:17.495693    1172 command_runner.go:130] >  ---
	I0807 20:02:17.495693    1172 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0807 20:02:17.495693    1172 command_runner.go:130] >  kind: ClusterConfiguration
	I0807 20:02:17.495693    1172 command_runner.go:130] >  apiServer:
	I0807 20:02:17.495693    1172 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.28.224.86"]
	I0807 20:02:17.495693    1172 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.28.226.95"]
	I0807 20:02:17.495693    1172 command_runner.go:130] >    extraArgs:
	I0807 20:02:17.495693    1172 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0807 20:02:17.495693    1172 command_runner.go:130] >  controllerManager:
	I0807 20:02:17.495693    1172 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.28.224.86
	+  advertiseAddress: 172.28.226.95
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-116700"
	   kubeletExtraArgs:
	-    node-ip: 172.28.224.86
	+    node-ip: 172.28.226.95
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.28.224.86"]
	+  certSANs: ["127.0.0.1", "localhost", "172.28.226.95"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
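The drift detection shown above amounts to a unified diff between the on-disk kubeadm config and the freshly rendered one; a non-empty diff triggers the reconfiguration that follows. A minimal sketch of that check (minikube does this in-process in `kubeadm.go`, not by shelling out; the file contents below are illustrative stand-ins for the two YAML files):

```shell
# Stand-ins for /var/tmp/minikube/kubeadm.yaml and kubeadm.yaml.new,
# differing only in the node IP, as in the log above.
old=$(mktemp); new=$(mktemp)
printf 'localAPIEndpoint:\n  advertiseAddress: 172.28.224.86\n' > "$old"
printf 'localAPIEndpoint:\n  advertiseAddress: 172.28.226.95\n' > "$new"
# diff exits non-zero when the files differ; that is the drift signal.
if ! diff -u "$old" "$new" > /dev/null; then
  echo "config drift detected"   # -> config drift detected
fi
rm -f "$old" "$new"
```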
	I0807 20:02:17.495693    1172 kubeadm.go:1160] stopping kube-system containers ...
	I0807 20:02:17.506787    1172 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0807 20:02:17.537224    1172 command_runner.go:130] > 32f103de03d3
	I0807 20:02:17.537224    1172 command_runner.go:130] > b6325ae79a14
	I0807 20:02:17.537224    1172 command_runner.go:130] > d716d608049c
	I0807 20:02:17.537224    1172 command_runner.go:130] > 201691a17a92
	I0807 20:02:17.537224    1172 command_runner.go:130] > ec2579bb9d23
	I0807 20:02:17.537224    1172 command_runner.go:130] > 3b896a77f546
	I0807 20:02:17.537224    1172 command_runner.go:130] > 9fd565bc6207
	I0807 20:02:17.537224    1172 command_runner.go:130] > 0877557fcf51
	I0807 20:02:17.537224    1172 command_runner.go:130] > 1415d4256b4a
	I0807 20:02:17.537224    1172 command_runner.go:130] > c90df84145cb
	I0807 20:02:17.537224    1172 command_runner.go:130] > 1dbaa8c7ed69
	I0807 20:02:17.537224    1172 command_runner.go:130] > c50e3a9ac99f
	I0807 20:02:17.537224    1172 command_runner.go:130] > 548a9e3a6616
	I0807 20:02:17.537224    1172 command_runner.go:130] > 1e5d82deee2f
	I0807 20:02:17.537224    1172 command_runner.go:130] > 92cf9118dac2
	I0807 20:02:17.537224    1172 command_runner.go:130] > 3047b2dc6a14
	I0807 20:02:17.537388    1172 docker.go:483] Stopping containers: [32f103de03d3 b6325ae79a14 d716d608049c 201691a17a92 ec2579bb9d23 3b896a77f546 9fd565bc6207 0877557fcf51 1415d4256b4a c90df84145cb 1dbaa8c7ed69 c50e3a9ac99f 548a9e3a6616 1e5d82deee2f 92cf9118dac2 3047b2dc6a14]
	I0807 20:02:17.546876    1172 ssh_runner.go:195] Run: docker stop 32f103de03d3 b6325ae79a14 d716d608049c 201691a17a92 ec2579bb9d23 3b896a77f546 9fd565bc6207 0877557fcf51 1415d4256b4a c90df84145cb 1dbaa8c7ed69 c50e3a9ac99f 548a9e3a6616 1e5d82deee2f 92cf9118dac2 3047b2dc6a14
	I0807 20:02:17.576218    1172 command_runner.go:130] > 32f103de03d3
	I0807 20:02:17.576218    1172 command_runner.go:130] > b6325ae79a14
	I0807 20:02:17.576218    1172 command_runner.go:130] > d716d608049c
	I0807 20:02:17.576218    1172 command_runner.go:130] > 201691a17a92
	I0807 20:02:17.576310    1172 command_runner.go:130] > ec2579bb9d23
	I0807 20:02:17.576310    1172 command_runner.go:130] > 3b896a77f546
	I0807 20:02:17.576310    1172 command_runner.go:130] > 9fd565bc6207
	I0807 20:02:17.576310    1172 command_runner.go:130] > 0877557fcf51
	I0807 20:02:17.576310    1172 command_runner.go:130] > 1415d4256b4a
	I0807 20:02:17.576310    1172 command_runner.go:130] > c90df84145cb
	I0807 20:02:17.576310    1172 command_runner.go:130] > 1dbaa8c7ed69
	I0807 20:02:17.576310    1172 command_runner.go:130] > c50e3a9ac99f
	I0807 20:02:17.576310    1172 command_runner.go:130] > 548a9e3a6616
	I0807 20:02:17.576310    1172 command_runner.go:130] > 1e5d82deee2f
	I0807 20:02:17.576388    1172 command_runner.go:130] > 92cf9118dac2
	I0807 20:02:17.576388    1172 command_runner.go:130] > 3047b2dc6a14
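The two commands above form a list-then-stop pattern: collect container IDs whose names match Kubernetes' `k8s_<container>_<pod>_<namespace>_` naming convention, then stop them all. A sketch of the equivalent pipeline (the log instead expands the IDs into a single `docker stop`; `CONTAINER_RUNTIME` is a hypothetical override added here so the pipeline can be exercised without a Docker daemon):

```shell
# Defaults to the real docker CLI; override with a mock for dry runs.
CONTAINER_RUNTIME=${CONTAINER_RUNTIME:-docker}

# List all kube-system pod containers (running or not) and stop them.
# xargs -r skips the stop entirely when no IDs match.
stop_kube_system() {
  "$CONTAINER_RUNTIME" ps -a --filter "name=k8s_.*_(kube-system)_" \
      --format '{{.ID}}' | xargs -r "$CONTAINER_RUNTIME" stop
}
```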
	I0807 20:02:17.587065    1172 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0807 20:02:17.625386    1172 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0807 20:02:17.647951    1172 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0807 20:02:17.647951    1172 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0807 20:02:17.647951    1172 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0807 20:02:17.647951    1172 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0807 20:02:17.649099    1172 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0807 20:02:17.649099    1172 kubeadm.go:157] found existing configuration files:
	
	I0807 20:02:17.665648    1172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0807 20:02:17.683748    1172 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0807 20:02:17.684726    1172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0807 20:02:17.697108    1172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0807 20:02:17.726660    1172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0807 20:02:17.743539    1172 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0807 20:02:17.744361    1172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0807 20:02:17.757232    1172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0807 20:02:17.791668    1172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0807 20:02:17.809486    1172 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0807 20:02:17.810301    1172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0807 20:02:17.822324    1172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0807 20:02:17.860335    1172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0807 20:02:17.884084    1172 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0807 20:02:17.884084    1172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0807 20:02:17.896560    1172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0807 20:02:17.936336    1172 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0807 20:02:17.965255    1172 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0807 20:02:18.297003    1172 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0807 20:02:18.297003    1172 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0807 20:02:18.297090    1172 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0807 20:02:18.297090    1172 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0807 20:02:18.297090    1172 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0807 20:02:18.297090    1172 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0807 20:02:18.297090    1172 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0807 20:02:18.297090    1172 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0807 20:02:18.297090    1172 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0807 20:02:18.297090    1172 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0807 20:02:18.297090    1172 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0807 20:02:18.297090    1172 command_runner.go:130] > [certs] Using the existing "sa" key
	I0807 20:02:18.297251    1172 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0807 20:02:19.913651    1172 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0807 20:02:19.913801    1172 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0807 20:02:19.913801    1172 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0807 20:02:19.913801    1172 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0807 20:02:19.913801    1172 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0807 20:02:19.913801    1172 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0807 20:02:19.913801    1172 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.6165295s)
	I0807 20:02:19.913801    1172 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0807 20:02:20.249821    1172 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0807 20:02:20.249821    1172 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0807 20:02:20.249821    1172 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0807 20:02:20.249821    1172 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0807 20:02:20.354153    1172 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0807 20:02:20.354213    1172 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0807 20:02:20.354213    1172 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0807 20:02:20.354213    1172 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0807 20:02:20.354282    1172 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0807 20:02:20.459772    1172 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
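The sequence above re-runs individual `kubeadm init` phases against the regenerated config rather than performing a full `kubeadm init`: certs, kubeconfig, kubelet-start, control-plane, and etcd, in that order. A sketch of the same sequence (`KUBEADM` is a hypothetical override added here so the loop can be exercised without a node; the default path matches this log):

```shell
KUBEADM=${KUBEADM:-sudo /var/lib/minikube/binaries/v1.30.3/kubeadm}
CFG=/var/tmp/minikube/kubeadm.yaml

# Replay the init phases in the order the log shows; stop on first failure.
run_phases() {
  for phase in "certs all" "kubeconfig all" "kubelet-start" \
               "control-plane all" "etcd local"; do
    # Word-splitting of $KUBEADM and $phase is intentional here.
    $KUBEADM init phase $phase --config "$CFG" || return 1
  done
}
```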
	I0807 20:02:20.459976    1172 api_server.go:52] waiting for apiserver process to appear ...
	I0807 20:02:20.472643    1172 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 20:02:20.981635    1172 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 20:02:21.491201    1172 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 20:02:21.977215    1172 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 20:02:22.484138    1172 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 20:02:22.513143    1172 command_runner.go:130] > 1971
	I0807 20:02:22.513143    1172 api_server.go:72] duration metric: took 2.0531407s to wait for apiserver process to appear ...
	I0807 20:02:22.513143    1172 api_server.go:88] waiting for apiserver healthz status ...
	I0807 20:02:22.513143    1172 api_server.go:253] Checking apiserver healthz at https://172.28.226.95:8443/healthz ...
	I0807 20:02:26.453833    1172 api_server.go:279] https://172.28.226.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0807 20:02:26.454320    1172 api_server.go:103] status: https://172.28.226.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0807 20:02:26.454320    1172 api_server.go:253] Checking apiserver healthz at https://172.28.226.95:8443/healthz ...
	I0807 20:02:26.512422    1172 api_server.go:279] https://172.28.226.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0807 20:02:26.512932    1172 api_server.go:103] status: https://172.28.226.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0807 20:02:26.519411    1172 api_server.go:253] Checking apiserver healthz at https://172.28.226.95:8443/healthz ...
	I0807 20:02:26.537948    1172 api_server.go:279] https://172.28.226.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0807 20:02:26.538511    1172 api_server.go:103] status: https://172.28.226.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0807 20:02:27.028033    1172 api_server.go:253] Checking apiserver healthz at https://172.28.226.95:8443/healthz ...
	I0807 20:02:27.035440    1172 api_server.go:279] https://172.28.226.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0807 20:02:27.035440    1172 api_server.go:103] status: https://172.28.226.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0807 20:02:27.516052    1172 api_server.go:253] Checking apiserver healthz at https://172.28.226.95:8443/healthz ...
	I0807 20:02:27.523779    1172 api_server.go:279] https://172.28.226.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0807 20:02:27.523779    1172 api_server.go:103] status: https://172.28.226.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0807 20:02:28.025705    1172 api_server.go:253] Checking apiserver healthz at https://172.28.226.95:8443/healthz ...
	I0807 20:02:28.036058    1172 api_server.go:279] https://172.28.226.95:8443/healthz returned 200:
	ok
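The healthz sequence above progresses 403 (anonymous access refused until RBAC bootstrap-roles finish) to 500 (post-start hooks still failing) to 200 `ok`. A sketch of the polling loop, assuming `curl` against this run's apiserver address; `probe` is a hypothetical helper factored out so the loop can be tested against a stub instead of a live apiserver:

```shell
APISERVER=${APISERVER:-https://172.28.226.95:8443}

# Print only the HTTP status code of /healthz (-k: the cert is minikube's own CA).
probe() { curl -sk -o /dev/null -w '%{http_code}' "$APISERVER/healthz"; }

# Poll until healthz returns 200; give up after 20 attempts (~10s).
wait_healthz() {
  tries=0
  until [ "$(probe)" = 200 ]; do
    tries=$((tries + 1))
    [ "$tries" -ge 20 ] && return 1
    sleep 0.5
  done
}
```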
	I0807 20:02:28.036522    1172 round_trippers.go:463] GET https://172.28.226.95:8443/version
	I0807 20:02:28.036587    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:28.036587    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:28.036666    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:28.047831    1172 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0807 20:02:28.048343    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:28.048343    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:28.048343    1172 round_trippers.go:580]     Content-Length: 263
	I0807 20:02:28.048343    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:28 GMT
	I0807 20:02:28.048343    1172 round_trippers.go:580]     Audit-Id: f3924de8-5cfe-44cd-ab6d-e8bdfbf1b0f7
	I0807 20:02:28.048343    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:28.048343    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:28.048343    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:28.048343    1172 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0807 20:02:28.048343    1172 api_server.go:141] control plane version: v1.30.3
	I0807 20:02:28.048343    1172 api_server.go:131] duration metric: took 5.5351293s to wait for apiserver health ...
	I0807 20:02:28.048343    1172 cni.go:84] Creating CNI manager for ""
	I0807 20:02:28.048343    1172 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0807 20:02:28.052357    1172 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0807 20:02:28.073555    1172 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0807 20:02:28.085709    1172 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0807 20:02:28.085768    1172 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0807 20:02:28.085768    1172 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0807 20:02:28.085768    1172 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0807 20:02:28.085768    1172 command_runner.go:130] > Access: 2024-08-07 20:00:47.586820200 +0000
	I0807 20:02:28.085768    1172 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0807 20:02:28.085768    1172 command_runner.go:130] > Change: 2024-08-07 20:00:36.290000000 +0000
	I0807 20:02:28.085768    1172 command_runner.go:130] >  Birth: -
	I0807 20:02:28.085921    1172 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0807 20:02:28.085952    1172 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0807 20:02:28.141236    1172 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0807 20:02:29.525465    1172 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0807 20:02:29.525584    1172 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0807 20:02:29.525584    1172 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0807 20:02:29.525657    1172 command_runner.go:130] > daemonset.apps/kindnet configured
	I0807 20:02:29.525657    1172 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.3844034s)
	I0807 20:02:29.525741    1172 system_pods.go:43] waiting for kube-system pods to appear ...
	I0807 20:02:29.525976    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods
	I0807 20:02:29.526049    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:29.526049    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:29.526049    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:29.531854    1172 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 20:02:29.532212    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:29.532375    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:29.532418    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:29 GMT
	I0807 20:02:29.532699    1172 round_trippers.go:580]     Audit-Id: f8e79090-e8b1-412f-95d2-f43a2412224c
	I0807 20:02:29.532728    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:29.532728    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:29.532728    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:29.534124    1172 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1945"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 85415 chars]
	I0807 20:02:29.539969    1172 system_pods.go:59] 12 kube-system pods found
	I0807 20:02:29.539969    1172 system_pods.go:61] "coredns-7db6d8ff4d-7l6v2" [7de73f9c-93d9-46c6-ae10-b253dd257a19] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0807 20:02:29.539969    1172 system_pods.go:61] "etcd-multinode-116700" [822f1e63-7c8a-4172-927c-32f4e0b5d505] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0807 20:02:29.539969    1172 system_pods.go:61] "kindnet-gk542" [bad4e2c3-505e-4175-9a5b-186a1874ff8d] Running
	I0807 20:02:29.539969    1172 system_pods.go:61] "kindnet-gsjlq" [7dac93b0-0cfa-4d64-a437-ce92de8bf57d] Running
	I0807 20:02:29.539969    1172 system_pods.go:61] "kindnet-kltmx" [b2ddfdd4-b957-45e3-b967-cf2650e86069] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0807 20:02:29.539969    1172 system_pods.go:61] "kube-apiserver-multinode-116700" [5111ea6a-eb9d-4e60-bbc5-698a5882a60a] Pending
	I0807 20:02:29.539969    1172 system_pods.go:61] "kube-controller-manager-multinode-116700" [4d2e8250-9b12-4277-8834-515c1621fc78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0807 20:02:29.539969    1172 system_pods.go:61] "kube-proxy-4lnjd" [254c1a93-f57b-4997-a3a1-d5f145f7c549] Running
	I0807 20:02:29.539969    1172 system_pods.go:61] "kube-proxy-fmjt9" [766df91e-8fd0-457b-8c11-8810059ca4d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0807 20:02:29.539969    1172 system_pods.go:61] "kube-proxy-vcb7n" [d8d87ad6-19cc-45fa-8c9f-1a862fec4e59] Running
	I0807 20:02:29.540991    1172 system_pods.go:61] "kube-scheduler-multinode-116700" [7b6df7b7-8c94-498a-bc4c-74d72efd572a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0807 20:02:29.540991    1172 system_pods.go:61] "storage-provisioner" [8a8036f6-f1a0-4fca-b8dd-ed99c3535b47] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0807 20:02:29.540991    1172 system_pods.go:74] duration metric: took 15.2504ms to wait for pod list to return data ...
	I0807 20:02:29.540991    1172 node_conditions.go:102] verifying NodePressure condition ...
	I0807 20:02:29.540991    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes
	I0807 20:02:29.540991    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:29.540991    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:29.540991    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:29.544983    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:29.544983    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:29.544983    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:29.544983    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:29.544983    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:29.544983    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:29 GMT
	I0807 20:02:29.544983    1172 round_trippers.go:580]     Audit-Id: 463c2353-57a7-42be-b54e-0c1b0dc0e14a
	I0807 20:02:29.544983    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:29.544983    1172 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1945"},"items":[{"metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15629 chars]
	I0807 20:02:29.546973    1172 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 20:02:29.546973    1172 node_conditions.go:123] node cpu capacity is 2
	I0807 20:02:29.546973    1172 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 20:02:29.546973    1172 node_conditions.go:123] node cpu capacity is 2
	I0807 20:02:29.546973    1172 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 20:02:29.546973    1172 node_conditions.go:123] node cpu capacity is 2
	I0807 20:02:29.546973    1172 node_conditions.go:105] duration metric: took 5.9813ms to run NodePressure ...
	I0807 20:02:29.546973    1172 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0807 20:02:29.792569    1172 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0807 20:02:30.020575    1172 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0807 20:02:30.022756    1172 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0807 20:02:30.022829    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0807 20:02:30.022829    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.022829    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.022829    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.029447    1172 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 20:02:30.029447    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.029447    1172 round_trippers.go:580]     Audit-Id: b4593179-8dd9-45f7-bd23-c9691a471adc
	I0807 20:02:30.029447    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.030001    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.030001    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.030001    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.030001    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.030745    1172 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1953"},"items":[{"metadata":{"name":"etcd-multinode-116700","namespace":"kube-system","uid":"822f1e63-7c8a-4172-927c-32f4e0b5d505","resourceVersion":"1915","creationTimestamp":"2024-08-07T20:02:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.226.95:2379","kubernetes.io/config.hash":"9eecaca34ea754a7954ea8f568cb96d3","kubernetes.io/config.mirror":"9eecaca34ea754a7954ea8f568cb96d3","kubernetes.io/config.seen":"2024-08-07T20:02:20.493455845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T20:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 30532 chars]
	I0807 20:02:30.032625    1172 kubeadm.go:739] kubelet initialised
	I0807 20:02:30.032682    1172 kubeadm.go:740] duration metric: took 9.9261ms waiting for restarted kubelet to initialise ...
	I0807 20:02:30.032682    1172 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 20:02:30.032880    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods
	I0807 20:02:30.032963    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.033003    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.033003    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.047837    1172 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0807 20:02:30.048841    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.048868    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.048868    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.048868    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.048868    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.048868    1172 round_trippers.go:580]     Audit-Id: 24d91f64-0521-41a7-8e58-3bea71e46190
	I0807 20:02:30.048868    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.050835    1172 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1953"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87137 chars]
	I0807 20:02:30.055832    1172 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace to be "Ready" ...
	I0807 20:02:30.055832    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:30.055832    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.055832    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.055832    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.059883    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:30.059983    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.059983    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.059983    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.059983    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.059983    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.060048    1172 round_trippers.go:580]     Audit-Id: 77748950-3c22-45e9-9b70-55051db4480c
	I0807 20:02:30.060048    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.060241    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:30.060777    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:30.060777    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.060777    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.060777    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.064145    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:30.064317    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.064317    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.064389    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.064389    1172 round_trippers.go:580]     Audit-Id: 4e324599-44e8-4143-9aff-efb19274d3d0
	I0807 20:02:30.064389    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.064389    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.064389    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.064745    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:30.065236    1172 pod_ready.go:97] node "multinode-116700" hosting pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:30.065311    1172 pod_ready.go:81] duration metric: took 9.4787ms for pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace to be "Ready" ...
	E0807 20:02:30.065311    1172 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-116700" hosting pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:30.065311    1172 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:02:30.065445    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-116700
	I0807 20:02:30.065445    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.065445    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.065445    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.067836    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:30.067836    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.067836    1172 round_trippers.go:580]     Audit-Id: 61dc34d8-5edc-4753-8e4d-44cf0f3cc0a9
	I0807 20:02:30.067836    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.067836    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.067836    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.067836    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.068497    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.068762    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-116700","namespace":"kube-system","uid":"822f1e63-7c8a-4172-927c-32f4e0b5d505","resourceVersion":"1915","creationTimestamp":"2024-08-07T20:02:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.226.95:2379","kubernetes.io/config.hash":"9eecaca34ea754a7954ea8f568cb96d3","kubernetes.io/config.mirror":"9eecaca34ea754a7954ea8f568cb96d3","kubernetes.io/config.seen":"2024-08-07T20:02:20.493455845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T20:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6384 chars]
	I0807 20:02:30.069019    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:30.069019    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.069019    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.069019    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.072653    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:30.072653    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.072653    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.072653    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.072653    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.072653    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.072653    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.072653    1172 round_trippers.go:580]     Audit-Id: cbaf0484-bed7-47a3-9145-2d34c6335afd
	I0807 20:02:30.072653    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:30.072653    1172 pod_ready.go:97] node "multinode-116700" hosting pod "etcd-multinode-116700" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:30.072653    1172 pod_ready.go:81] duration metric: took 7.3415ms for pod "etcd-multinode-116700" in "kube-system" namespace to be "Ready" ...
	E0807 20:02:30.072653    1172 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-116700" hosting pod "etcd-multinode-116700" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:30.072653    1172 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:02:30.072653    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-116700
	I0807 20:02:30.072653    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.072653    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.072653    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.076647    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:30.076647    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.076647    1172 round_trippers.go:580]     Audit-Id: 4caaeb1b-7e00-4d7e-be2a-b0f5a9c93bf9
	I0807 20:02:30.076647    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.076647    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.076647    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.076647    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.076936    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.076991    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-116700","namespace":"kube-system","uid":"5111ea6a-eb9d-4e60-bbc5-698a5882a60a","resourceVersion":"1949","creationTimestamp":"2024-08-07T20:02:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.226.95:8443","kubernetes.io/config.hash":"8066c637edc34431d2657878d0b69f79","kubernetes.io/config.mirror":"8066c637edc34431d2657878d0b69f79","kubernetes.io/config.seen":"2024-08-07T20:02:20.432683231Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T20:02:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7939 chars]
	I0807 20:02:30.077618    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:30.077691    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.077691    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.077691    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.084374    1172 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 20:02:30.084374    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.084374    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.084374    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.084374    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.084374    1172 round_trippers.go:580]     Audit-Id: d5e3dd00-4fa0-449e-b6ea-b58355d25614
	I0807 20:02:30.084374    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.084374    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.085525    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:30.085684    1172 pod_ready.go:97] node "multinode-116700" hosting pod "kube-apiserver-multinode-116700" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:30.085684    1172 pod_ready.go:81] duration metric: took 13.0317ms for pod "kube-apiserver-multinode-116700" in "kube-system" namespace to be "Ready" ...
	E0807 20:02:30.085684    1172 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-116700" hosting pod "kube-apiserver-multinode-116700" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:30.085684    1172 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:02:30.085684    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-116700
	I0807 20:02:30.085684    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.085684    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.085684    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.091265    1172 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 20:02:30.091265    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.091265    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.091265    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.091265    1172 round_trippers.go:580]     Audit-Id: bbe13a2a-6226-427a-aef5-bc92cc438508
	I0807 20:02:30.091265    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.091265    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.091265    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.091265    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-116700","namespace":"kube-system","uid":"4d2e8250-9b12-4277-8834-515c1621fc78","resourceVersion":"1912","creationTimestamp":"2024-08-07T19:37:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ef62d358a9b469de2443e4a4f620921d","kubernetes.io/config.mirror":"ef62d358a9b469de2443e4a4f620921d","kubernetes.io/config.seen":"2024-08-07T19:37:39.552053960Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7732 chars]
	I0807 20:02:30.092248    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:30.092248    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.092248    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.092248    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.094292    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:30.094292    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.094292    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.094292    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.094292    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.094292    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.094292    1172 round_trippers.go:580]     Audit-Id: ffe0dec2-0e97-4714-bc8b-d1e91c5ce4ab
	I0807 20:02:30.094292    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.094292    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:30.095265    1172 pod_ready.go:97] node "multinode-116700" hosting pod "kube-controller-manager-multinode-116700" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:30.095265    1172 pod_ready.go:81] duration metric: took 9.5808ms for pod "kube-controller-manager-multinode-116700" in "kube-system" namespace to be "Ready" ...
	E0807 20:02:30.095265    1172 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-116700" hosting pod "kube-controller-manager-multinode-116700" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:30.095265    1172 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4lnjd" in "kube-system" namespace to be "Ready" ...
	I0807 20:02:30.229064    1172 request.go:629] Waited for 133.7967ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4lnjd
	I0807 20:02:30.229370    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4lnjd
	I0807 20:02:30.229370    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.229370    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.229370    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.232974    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:30.233124    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.233247    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.233247    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.233247    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.233247    1172 round_trippers.go:580]     Audit-Id: 2647c25e-d1c6-4c7d-8856-c441cccd69ac
	I0807 20:02:30.233247    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.233247    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.233761    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4lnjd","generateName":"kube-proxy-","namespace":"kube-system","uid":"254c1a93-f57b-4997-a3a1-d5f145f7c549","resourceVersion":"1843","creationTimestamp":"2024-08-07T19:46:10Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d82856c0-3330-4ab9-b7bf-54ed48646bce","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:46:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d82856c0-3330-4ab9-b7bf-54ed48646bce\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0807 20:02:30.433980    1172 request.go:629] Waited for 199.2358ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/nodes/multinode-116700-m03
	I0807 20:02:30.434357    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700-m03
	I0807 20:02:30.434357    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.434357    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.434357    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.437745    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:30.437745    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.437745    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.437745    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.437745    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.437745    1172 round_trippers.go:580]     Audit-Id: 7e8fd42c-f914-4725-b638-2ea5319862ca
	I0807 20:02:30.437745    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.437745    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.437745    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m03","uid":"9ade310d-2eba-4d92-8b38-64ccda5e080c","resourceVersion":"1854","creationTimestamp":"2024-08-07T19:57:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_57_34_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:57:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4400 chars]
	I0807 20:02:30.438739    1172 pod_ready.go:97] node "multinode-116700-m03" hosting pod "kube-proxy-4lnjd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700-m03" has status "Ready":"Unknown"
	I0807 20:02:30.438739    1172 pod_ready.go:81] duration metric: took 343.4688ms for pod "kube-proxy-4lnjd" in "kube-system" namespace to be "Ready" ...
	E0807 20:02:30.438739    1172 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-116700-m03" hosting pod "kube-proxy-4lnjd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700-m03" has status "Ready":"Unknown"
	I0807 20:02:30.438739    1172 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fmjt9" in "kube-system" namespace to be "Ready" ...
	I0807 20:02:30.625867    1172 request.go:629] Waited for 187.1258ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fmjt9
	I0807 20:02:30.625998    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fmjt9
	I0807 20:02:30.626163    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.626268    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.626291    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.629748    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:30.629748    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.629748    1172 round_trippers.go:580]     Audit-Id: 55cd0960-dceb-4483-a2c9-640e04f8c0e2
	I0807 20:02:30.629748    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.629748    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.629748    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.629748    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.629748    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.629748    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fmjt9","generateName":"kube-proxy-","namespace":"kube-system","uid":"766df91e-8fd0-457b-8c11-8810059ca4d9","resourceVersion":"1952","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d82856c0-3330-4ab9-b7bf-54ed48646bce","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d82856c0-3330-4ab9-b7bf-54ed48646bce\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0807 20:02:30.836931    1172 request.go:629] Waited for 205.8516ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:30.837221    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:30.837221    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.837221    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.837221    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.840960    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:30.840983    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.840983    1172 round_trippers.go:580]     Audit-Id: 2cf5d906-e99b-4d25-861e-41164e4ce77f
	I0807 20:02:30.840983    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.840983    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.840983    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.840983    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.840983    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.841154    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:30.841726    1172 pod_ready.go:97] node "multinode-116700" hosting pod "kube-proxy-fmjt9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:30.841829    1172 pod_ready.go:81] duration metric: took 403.0851ms for pod "kube-proxy-fmjt9" in "kube-system" namespace to be "Ready" ...
	E0807 20:02:30.841829    1172 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-116700" hosting pod "kube-proxy-fmjt9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:30.841829    1172 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vcb7n" in "kube-system" namespace to be "Ready" ...
	I0807 20:02:31.023663    1172 request.go:629] Waited for 181.5326ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vcb7n
	I0807 20:02:31.023743    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vcb7n
	I0807 20:02:31.023743    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:31.023743    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:31.023743    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:31.027553    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:31.027553    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:31.027553    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:31.027553    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:31.027553    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:31 GMT
	I0807 20:02:31.027553    1172 round_trippers.go:580]     Audit-Id: 2a267fea-3fd9-4a2b-a7a7-306a0837c4a3
	I0807 20:02:31.027553    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:31.027850    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:31.028226    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vcb7n","generateName":"kube-proxy-","namespace":"kube-system","uid":"d8d87ad6-19cc-45fa-8c9f-1a862fec4e59","resourceVersion":"661","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d82856c0-3330-4ab9-b7bf-54ed48646bce","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d82856c0-3330-4ab9-b7bf-54ed48646bce\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5836 chars]
	I0807 20:02:31.226467    1172 request.go:629] Waited for 197.1667ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/nodes/multinode-116700-m02
	I0807 20:02:31.226467    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700-m02
	I0807 20:02:31.226693    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:31.226693    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:31.226693    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:31.229258    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:31.229258    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:31.229258    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:31.229258    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:31.230002    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:31.230002    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:31.230002    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:31 GMT
	I0807 20:02:31.230002    1172 round_trippers.go:580]     Audit-Id: 73f837a0-697b-4d56-9d5f-01b5f9b9522c
	I0807 20:02:31.230270    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"1754","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3826 chars]
	I0807 20:02:31.230667    1172 pod_ready.go:92] pod "kube-proxy-vcb7n" in "kube-system" namespace has status "Ready":"True"
	I0807 20:02:31.230667    1172 pod_ready.go:81] duration metric: took 388.833ms for pod "kube-proxy-vcb7n" in "kube-system" namespace to be "Ready" ...
	I0807 20:02:31.230667    1172 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:02:31.428462    1172 request.go:629] Waited for 197.5381ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-116700
	I0807 20:02:31.428704    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-116700
	I0807 20:02:31.428704    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:31.428704    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:31.428704    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:31.432059    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:31.432059    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:31.433086    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:31 GMT
	I0807 20:02:31.433086    1172 round_trippers.go:580]     Audit-Id: b0fb5448-79b5-4980-b9f2-51c658e99485
	I0807 20:02:31.433133    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:31.433133    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:31.433133    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:31.433133    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:31.433263    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-116700","namespace":"kube-system","uid":"7b6df7b7-8c94-498a-bc4c-74d72efd572a","resourceVersion":"1913","creationTimestamp":"2024-08-07T19:37:39Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fde91c95fce6faff219ccfa4b0b2484c","kubernetes.io/config.mirror":"fde91c95fce6faff219ccfa4b0b2484c","kubernetes.io/config.seen":"2024-08-07T19:37:39.552047359Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5444 chars]
	I0807 20:02:31.629942    1172 request.go:629] Waited for 195.8068ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:31.630110    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:31.630110    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:31.630110    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:31.630110    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:31.632681    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:31.632681    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:31.632681    1172 round_trippers.go:580]     Audit-Id: 2e87a254-c2c0-49ca-89f2-87026a94e4b0
	I0807 20:02:31.632681    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:31.632681    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:31.633331    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:31.633331    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:31.633331    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:31 GMT
	I0807 20:02:31.633898    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:31.634527    1172 pod_ready.go:97] node "multinode-116700" hosting pod "kube-scheduler-multinode-116700" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:31.634604    1172 pod_ready.go:81] duration metric: took 403.9318ms for pod "kube-scheduler-multinode-116700" in "kube-system" namespace to be "Ready" ...
	E0807 20:02:31.634604    1172 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-116700" hosting pod "kube-scheduler-multinode-116700" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:31.634604    1172 pod_ready.go:38] duration metric: took 1.6018476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 20:02:31.634679    1172 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0807 20:02:31.655839    1172 command_runner.go:130] > -16
	I0807 20:02:31.656051    1172 ops.go:34] apiserver oom_adj: -16
	I0807 20:02:31.656051    1172 kubeadm.go:597] duration metric: took 14.2419302s to restartPrimaryControlPlane
	I0807 20:02:31.656131    1172 kubeadm.go:394] duration metric: took 14.3101778s to StartCluster
	I0807 20:02:31.656131    1172 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 20:02:31.656131    1172 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 20:02:31.658045    1172 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 20:02:31.659688    1172 start.go:235] Will wait 6m0s for node &{Name: IP:172.28.226.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 20:02:31.659741    1172 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0807 20:02:31.660168    1172 config.go:182] Loaded profile config "multinode-116700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 20:02:31.663131    1172 out.go:177] * Verifying Kubernetes components...
	I0807 20:02:31.669054    1172 out.go:177] * Enabled addons: 
	I0807 20:02:31.671513    1172 addons.go:510] duration metric: took 11.8246ms for enable addons: enabled=[]
	I0807 20:02:31.677543    1172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 20:02:31.946334    1172 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 20:02:31.972467    1172 node_ready.go:35] waiting up to 6m0s for node "multinode-116700" to be "Ready" ...
	I0807 20:02:31.973468    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:31.973468    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:31.973468    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:31.973468    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:31.977446    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:31.977446    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:31.977446    1172 round_trippers.go:580]     Audit-Id: 28062720-7444-428d-bda9-fa6ff9fc87c4
	I0807 20:02:31.977446    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:31.977446    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:31.977446    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:31.977446    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:31.977446    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:31 GMT
	I0807 20:02:31.978694    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:32.475181    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:32.475181    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:32.475181    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:32.475181    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:32.478767    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:32.478887    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:32.478887    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:32 GMT
	I0807 20:02:32.478887    1172 round_trippers.go:580]     Audit-Id: 7f873afe-44c7-498b-a4b3-24497c382afb
	I0807 20:02:32.478887    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:32.478887    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:32.478887    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:32.478887    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:32.480186    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:32.987293    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:32.987293    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:32.987293    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:32.987293    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:32.991054    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:32.991356    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:32.991356    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:32.991432    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:33 GMT
	I0807 20:02:32.991432    1172 round_trippers.go:580]     Audit-Id: b79303ae-051c-4d12-bbc0-a9c3f9e0c9d3
	I0807 20:02:32.991432    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:32.991432    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:32.991432    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:32.991432    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:33.473992    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:33.474301    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:33.474301    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:33.474414    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:33.481135    1172 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 20:02:33.481346    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:33.481346    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:33.481346    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:33 GMT
	I0807 20:02:33.481346    1172 round_trippers.go:580]     Audit-Id: f4214583-7382-43cf-842c-cdde3e9855b6
	I0807 20:02:33.481346    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:33.481346    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:33.481346    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:33.481346    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:33.986723    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:33.986723    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:33.986723    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:33.986723    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:33.990363    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:33.991196    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:33.991196    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:33.991196    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:33.991196    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:33.991196    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:34 GMT
	I0807 20:02:33.991334    1172 round_trippers.go:580]     Audit-Id: 89a3befa-fbb3-498c-8fb1-972d492df91d
	I0807 20:02:33.991334    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:33.991374    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:33.992336    1172 node_ready.go:53] node "multinode-116700" has status "Ready":"False"
	I0807 20:02:34.483881    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:34.483881    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:34.484196    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:34.484196    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:34.488755    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:34.488755    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:34.488755    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:34.488755    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:34 GMT
	I0807 20:02:34.488755    1172 round_trippers.go:580]     Audit-Id: 3fce2f23-72a5-4813-80a6-252f9d60e6e6
	I0807 20:02:34.488755    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:34.488755    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:34.488755    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:34.488755    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:34.982625    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:34.982625    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:34.982625    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:34.982625    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:34.987210    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:34.987210    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:34.987441    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:35 GMT
	I0807 20:02:34.987441    1172 round_trippers.go:580]     Audit-Id: 7ceaae53-23ce-4143-bc21-5a9eac82a57d
	I0807 20:02:34.987441    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:34.987441    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:34.987441    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:34.987441    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:34.987617    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:35.479925    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:35.479925    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:35.479925    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:35.479925    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:35.483462    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:35.483462    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:35.483898    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:35 GMT
	I0807 20:02:35.483898    1172 round_trippers.go:580]     Audit-Id: 7503987e-d476-43d5-b0f9-2f7fca1bd815
	I0807 20:02:35.483898    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:35.483898    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:35.483898    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:35.483898    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:35.484004    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:35.983558    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:35.983558    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:35.983558    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:35.983558    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:35.989954    1172 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 20:02:35.989954    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:35.990019    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:35.990019    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:35.990019    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:35.990019    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:36 GMT
	I0807 20:02:35.990117    1172 round_trippers.go:580]     Audit-Id: 13a683f6-52a5-486c-a1ff-89455457ff43
	I0807 20:02:35.990175    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:35.991199    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:36.483247    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:36.483247    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:36.483247    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:36.483335    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:36.487504    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:36.487504    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:36.487504    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:36.487563    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:36.487563    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:36.487563    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:36 GMT
	I0807 20:02:36.487563    1172 round_trippers.go:580]     Audit-Id: 891433bb-4097-4d45-984c-3e7ade9e18c6
	I0807 20:02:36.487617    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:36.487617    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:36.488682    1172 node_ready.go:53] node "multinode-116700" has status "Ready":"False"
	I0807 20:02:36.980123    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:36.980226    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:36.980226    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:36.980226    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:36.987944    1172 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 20:02:36.987944    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:36.987944    1172 round_trippers.go:580]     Audit-Id: 16f70f5d-5d19-47a3-b9d0-213fc2d451ae
	I0807 20:02:36.987944    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:36.987944    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:36.987944    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:36.987944    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:36.987944    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:37 GMT
	I0807 20:02:36.987944    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:37.481170    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:37.481333    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:37.481333    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:37.481333    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:37.484261    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:37.484261    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:37.484261    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:37.485180    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:37.485180    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:37.485180    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:37 GMT
	I0807 20:02:37.485180    1172 round_trippers.go:580]     Audit-Id: bf988547-6486-49a8-892b-0ff96bd013d3
	I0807 20:02:37.485180    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:37.486301    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:37.980048    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:37.980126    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:37.980126    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:37.980126    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:37.984487    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:37.984487    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:37.984487    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:37.984487    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:37.984914    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:37.984914    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:37.984914    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:38 GMT
	I0807 20:02:37.984914    1172 round_trippers.go:580]     Audit-Id: f99f0558-7749-42fb-b424-cfa921f0c8a9
	I0807 20:02:37.985427    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:38.479956    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:38.480090    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:38.480173    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:38.480173    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:38.484247    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:38.484247    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:38.484247    1172 round_trippers.go:580]     Audit-Id: 5472898b-f4d7-4327-b46b-9d7a2000afa9
	I0807 20:02:38.484247    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:38.484247    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:38.484247    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:38.484247    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:38.484761    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:38 GMT
	I0807 20:02:38.485635    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:38.982403    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:38.982592    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:38.982592    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:38.982592    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:38.985173    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:38.985173    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:38.985173    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:38.985173    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:38.985173    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:39 GMT
	I0807 20:02:38.985173    1172 round_trippers.go:580]     Audit-Id: fb19d351-def8-4097-8bfd-21334d986bfa
	I0807 20:02:38.985173    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:38.985173    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:38.985173    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:38.986204    1172 node_ready.go:53] node "multinode-116700" has status "Ready":"False"
	I0807 20:02:39.488094    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:39.488094    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:39.488094    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:39.488094    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:39.492479    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:39.492565    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:39.492610    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:39.492610    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:39.492610    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:39 GMT
	I0807 20:02:39.492674    1172 round_trippers.go:580]     Audit-Id: 4c0406b4-1053-473e-924f-2a77a8d4d0a8
	I0807 20:02:39.492760    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:39.492760    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:39.493039    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:39.974794    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:39.974794    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:39.974794    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:39.974794    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:39.978467    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:39.978467    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:39.978813    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:39.978813    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:39.978813    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:39.978813    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:39 GMT
	I0807 20:02:39.978813    1172 round_trippers.go:580]     Audit-Id: 9ae0e219-f98a-4cad-9120-a1c129abb84c
	I0807 20:02:39.978813    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:39.978995    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:40.488046    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:40.488107    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:40.488107    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:40.488107    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:40.492007    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:40.492435    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:40.492435    1172 round_trippers.go:580]     Audit-Id: 2fea58af-6cbe-4034-9f2c-bf3657b6ed62
	I0807 20:02:40.492435    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:40.492435    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:40.492502    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:40.492502    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:40.492502    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:40 GMT
	I0807 20:02:40.493079    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:40.974436    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:40.974548    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:40.974548    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:40.974548    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:40.978109    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:40.978109    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:40.979033    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:40 GMT
	I0807 20:02:40.979033    1172 round_trippers.go:580]     Audit-Id: 5ff32c08-4d12-4259-b5f6-eacdbb09df4f
	I0807 20:02:40.979033    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:40.979033    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:40.979033    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:40.979033    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:40.979255    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:41.485676    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:41.485676    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:41.485676    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:41.485676    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:41.490338    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:41.490338    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:41.490695    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:41.490695    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:41.490695    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:41.490695    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:41.490695    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:41 GMT
	I0807 20:02:41.490695    1172 round_trippers.go:580]     Audit-Id: c55403ef-1096-430b-8552-802e6b1358c5
	I0807 20:02:41.490942    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:41.491595    1172 node_ready.go:53] node "multinode-116700" has status "Ready":"False"
	I0807 20:02:41.985949    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:41.986015    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:41.986015    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:41.986015    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:41.990436    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:41.990436    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:41.990497    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:41.990497    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:41.990497    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:41.990497    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:42 GMT
	I0807 20:02:41.990497    1172 round_trippers.go:580]     Audit-Id: 3fb2d2e1-9010-4c21-a815-776d52b7733c
	I0807 20:02:41.990497    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:41.991410    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:42.485798    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:42.485798    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:42.485881    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:42.485881    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:42.488828    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:42.488828    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:42.488828    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:42.488828    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:42.488828    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:42.488828    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:42 GMT
	I0807 20:02:42.488828    1172 round_trippers.go:580]     Audit-Id: baefad3f-0707-45ae-b242-bd8bb38aa43d
	I0807 20:02:42.488828    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:42.489484    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:42.988215    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:42.988215    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:42.988215    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:42.988215    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:42.991834    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:42.991834    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:42.991834    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:43 GMT
	I0807 20:02:42.991834    1172 round_trippers.go:580]     Audit-Id: fb7b93fe-f678-49bf-9186-f6df7963b507
	I0807 20:02:42.991834    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:42.991834    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:42.991834    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:42.991834    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:42.992809    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:43.472970    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:43.473296    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:43.473296    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:43.473296    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:43.477089    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:43.478087    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:43.478087    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:43.478087    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:43.478087    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:43 GMT
	I0807 20:02:43.478087    1172 round_trippers.go:580]     Audit-Id: 9137dcfe-94d1-4fdc-a7dc-e8f213e03b50
	I0807 20:02:43.478087    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:43.478087    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:43.478852    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:43.987538    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:43.987619    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:43.987619    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:43.987619    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:43.990901    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:43.991828    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:43.991828    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:43.991828    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:44 GMT
	I0807 20:02:43.991828    1172 round_trippers.go:580]     Audit-Id: bb94eb97-33bb-4dfa-a2c1-016dbab3219f
	I0807 20:02:43.991828    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:43.991828    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:43.991828    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:43.992081    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:43.992449    1172 node_ready.go:53] node "multinode-116700" has status "Ready":"False"
	I0807 20:02:44.486473    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:44.486473    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:44.486473    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:44.486625    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:44.490457    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:44.491475    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:44.491475    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:44.491475    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:44 GMT
	I0807 20:02:44.491475    1172 round_trippers.go:580]     Audit-Id: 83c10ef6-e38e-43c9-90be-166e13e62969
	I0807 20:02:44.491475    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:44.491566    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:44.491566    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:44.492619    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:44.985369    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:44.985369    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:44.985795    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:44.985795    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:44.989257    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:44.990136    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:44.990136    1172 round_trippers.go:580]     Audit-Id: 61e25bf3-66fc-4383-bc7a-2e7101f62f08
	I0807 20:02:44.990136    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:44.990136    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:44.990136    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:44.990136    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:44.990136    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:45 GMT
	I0807 20:02:44.990450    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:45.482904    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:45.483008    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:45.483008    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:45.483008    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:45.489500    1172 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 20:02:45.489500    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:45.489500    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:45.489500    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:45.489500    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:45.489500    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:45.489500    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:45 GMT
	I0807 20:02:45.489500    1172 round_trippers.go:580]     Audit-Id: 1b07f12c-f95b-4bad-87f4-3a576b350463
	I0807 20:02:45.490124    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:45.982402    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:45.982578    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:45.982578    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:45.982578    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:45.987197    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:45.987478    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:45.987478    1172 round_trippers.go:580]     Audit-Id: dfdb34cd-3020-402e-a447-ffad979b8f13
	I0807 20:02:45.987478    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:45.987478    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:45.987478    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:45.987478    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:45.987478    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:46 GMT
	I0807 20:02:45.987478    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:46.481628    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:46.481722    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:46.481722    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:46.481722    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:46.488351    1172 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 20:02:46.488351    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:46.488351    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:46.488351    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:46.488351    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:46.488351    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:46 GMT
	I0807 20:02:46.488351    1172 round_trippers.go:580]     Audit-Id: 58679323-d3ae-4597-8ff0-bc6b11c17152
	I0807 20:02:46.488351    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:46.488351    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:46.489257    1172 node_ready.go:53] node "multinode-116700" has status "Ready":"False"
	I0807 20:02:46.979003    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:46.979057    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:46.979057    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:46.979057    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:46.982838    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:46.983739    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:46.983739    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:46.983739    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:46.983739    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:46.983739    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:47 GMT
	I0807 20:02:46.983739    1172 round_trippers.go:580]     Audit-Id: 2244b236-b37d-4823-91cf-facd140b9012
	I0807 20:02:46.983739    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:46.983940    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:47.475415    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:47.475506    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:47.475506    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:47.475506    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:47.480815    1172 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 20:02:47.481346    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:47.481423    1172 round_trippers.go:580]     Audit-Id: a3e73824-025c-4479-a460-44d386efb72d
	I0807 20:02:47.481423    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:47.481423    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:47.481423    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:47.481423    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:47.481423    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:47 GMT
	I0807 20:02:47.481850    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2005","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0807 20:02:47.482239    1172 node_ready.go:49] node "multinode-116700" has status "Ready":"True"
	I0807 20:02:47.482239    1172 node_ready.go:38] duration metric: took 15.5095743s for node "multinode-116700" to be "Ready" ...
	I0807 20:02:47.482239    1172 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 20:02:47.482239    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods
	I0807 20:02:47.482239    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:47.482239    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:47.482239    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:47.491773    1172 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0807 20:02:47.491773    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:47.491773    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:47.491773    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:47.491773    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:47 GMT
	I0807 20:02:47.491773    1172 round_trippers.go:580]     Audit-Id: 0c5acbaa-b000-4adf-bf7d-1b0b72dc8274
	I0807 20:02:47.491773    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:47.491773    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:47.493552    1172 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2006"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86163 chars]
	I0807 20:02:47.497680    1172 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace to be "Ready" ...
	I0807 20:02:47.497680    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:47.497680    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:47.497680    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:47.497680    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:47.500355    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:47.500355    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:47.500355    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:47.500637    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:47.500637    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:47 GMT
	I0807 20:02:47.500637    1172 round_trippers.go:580]     Audit-Id: 5e61182d-f183-4f4e-a5f4-4d60c4bc7da8
	I0807 20:02:47.500637    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:47.500637    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:47.500956    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:47.501218    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:47.501218    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:47.501218    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:47.501218    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:47.504426    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:47.504723    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:47.504723    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:47.504723    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:47.504723    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:47 GMT
	I0807 20:02:47.504723    1172 round_trippers.go:580]     Audit-Id: 93bbe4f5-4c0f-4faf-a919-44179b91a49f
	I0807 20:02:47.504723    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:47.504723    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:47.505103    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2005","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0807 20:02:48.002331    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:48.002331    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:48.002331    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:48.002331    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:48.006680    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:48.007403    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:48.007403    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:48.007403    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:48.007403    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:48 GMT
	I0807 20:02:48.007403    1172 round_trippers.go:580]     Audit-Id: da2ddd1d-5874-4b52-9140-bd62fc152298
	I0807 20:02:48.007403    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:48.007403    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:48.007672    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:48.008372    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:48.008372    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:48.008372    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:48.008372    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:48.013670    1172 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 20:02:48.013670    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:48.013670    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:48.013670    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:48.013670    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:48 GMT
	I0807 20:02:48.013670    1172 round_trippers.go:580]     Audit-Id: 7c24c2c0-f664-4f82-8573-be820f144457
	I0807 20:02:48.013670    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:48.013670    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:48.014362    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2005","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0807 20:02:48.503844    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:48.503844    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:48.503844    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:48.503844    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:48.508442    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:48.508442    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:48.508442    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:48.508442    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:48.508442    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:48.508442    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:48.508442    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:48 GMT
	I0807 20:02:48.508442    1172 round_trippers.go:580]     Audit-Id: 57ad2480-4f29-4049-89ec-e2d8d906d4de
	I0807 20:02:48.509754    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:48.510868    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:48.510868    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:48.510868    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:48.510938    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:48.513064    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:48.514056    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:48.514056    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:48.514056    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:48.514056    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:48 GMT
	I0807 20:02:48.514056    1172 round_trippers.go:580]     Audit-Id: c16c9408-aab1-4070-805c-61a05c35ee3b
	I0807 20:02:48.514056    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:48.514138    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:48.514376    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2005","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0807 20:02:49.001170    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:49.001246    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:49.001246    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:49.001246    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:49.004571    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:49.005444    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:49.005444    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:49.005444    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:49.005444    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:49.005444    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:49.005444    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:49 GMT
	I0807 20:02:49.005444    1172 round_trippers.go:580]     Audit-Id: 7ea24022-e900-45c8-a69b-f490dc00ac9c
	I0807 20:02:49.005444    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:49.006426    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:49.006426    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:49.006495    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:49.006495    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:49.009854    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:49.009996    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:49.009996    1172 round_trippers.go:580]     Audit-Id: 6b02ea79-4576-4568-8658-66daa564c002
	I0807 20:02:49.009996    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:49.009996    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:49.009996    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:49.009996    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:49.009996    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:49 GMT
	I0807 20:02:49.010300    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2005","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0807 20:02:49.502735    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:49.502735    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:49.502735    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:49.502735    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:49.507382    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:49.507599    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:49.507599    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:49.507599    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:49.507599    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:49.507599    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:49.507599    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:49 GMT
	I0807 20:02:49.507599    1172 round_trippers.go:580]     Audit-Id: 2b3bb48d-eab6-444e-9c0b-95d9a5e868cd
	I0807 20:02:49.507998    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:49.508675    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:49.508675    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:49.508675    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:49.508675    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:49.512462    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:49.512591    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:49.512591    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:49.512591    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:49.512591    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:49.512591    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:49 GMT
	I0807 20:02:49.512591    1172 round_trippers.go:580]     Audit-Id: 6b14fa3a-28f7-45cb-af03-50a0840dfd16
	I0807 20:02:49.512591    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:49.513099    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:49.513646    1172 pod_ready.go:102] pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace has status "Ready":"False"
	I0807 20:02:50.000384    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:50.000454    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:50.000454    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:50.000454    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:50.004772    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:50.005311    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:50.005311    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:50.005311    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:50 GMT
	I0807 20:02:50.005311    1172 round_trippers.go:580]     Audit-Id: 95572ba9-7464-49c4-a689-18e57ba8cefc
	I0807 20:02:50.005311    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:50.005311    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:50.005311    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:50.005714    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:50.007076    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:50.007076    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:50.007114    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:50.007114    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:50.014435    1172 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 20:02:50.014540    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:50.014540    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:50.014540    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:50.014540    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:50.014602    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:50 GMT
	I0807 20:02:50.014602    1172 round_trippers.go:580]     Audit-Id: 59a2a45a-ef3b-4f89-8c92-f641a597dd36
	I0807 20:02:50.014627    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:50.014627    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:50.513192    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:50.513192    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:50.513192    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:50.513192    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:50.517124    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:50.517124    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:50.517124    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:50.517124    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:50.517124    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:50 GMT
	I0807 20:02:50.517124    1172 round_trippers.go:580]     Audit-Id: 1d958595-0b12-4811-9877-94900d46f196
	I0807 20:02:50.517124    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:50.517124    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:50.518163    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:50.519162    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:50.519162    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:50.519162    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:50.519162    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:50.522151    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:50.522151    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:50.522151    1172 round_trippers.go:580]     Audit-Id: dc19d8ce-6ff4-47f9-bd1f-d8af53d60dd2
	I0807 20:02:50.522278    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:50.522278    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:50.522278    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:50.522278    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:50.522278    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:50 GMT
	I0807 20:02:50.522736    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:50.999686    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:50.999745    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:50.999745    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:50.999745    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:51.007056    1172 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 20:02:51.007169    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:51.007169    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:51.007228    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:51 GMT
	I0807 20:02:51.007228    1172 round_trippers.go:580]     Audit-Id: e3625428-9c77-4943-897c-8292e1205ce2
	I0807 20:02:51.007228    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:51.007257    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:51.007257    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:51.007257    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:51.008124    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:51.008124    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:51.008124    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:51.008124    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:51.010471    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:51.011496    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:51.011496    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:51.011496    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:51.011496    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:51 GMT
	I0807 20:02:51.011496    1172 round_trippers.go:580]     Audit-Id: d0830fbb-7cc2-47cb-b344-1350030b4d7d
	I0807 20:02:51.011496    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:51.011496    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:51.011496    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:51.499524    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:51.499726    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:51.499726    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:51.499795    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:51.504333    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:51.504333    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:51.504333    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:51 GMT
	I0807 20:02:51.504778    1172 round_trippers.go:580]     Audit-Id: 71998f35-385c-43de-86f9-3cc7b8bc4baf
	I0807 20:02:51.504778    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:51.504778    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:51.504778    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:51.504778    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:51.505202    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:51.506035    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:51.506101    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:51.506101    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:51.506101    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:51.508301    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:51.508301    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:51.508301    1172 round_trippers.go:580]     Audit-Id: 50a601e2-d68b-495a-aecd-4f40f5d35dd4
	I0807 20:02:51.508301    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:51.508301    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:51.508301    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:51.508301    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:51.508301    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:51 GMT
	I0807 20:02:51.508790    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:52.012592    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:52.012592    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:52.012733    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:52.012733    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:52.015520    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:52.016543    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:52.016543    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:52.016543    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:52.016543    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:52.016543    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:52.016543    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:52 GMT
	I0807 20:02:52.016543    1172 round_trippers.go:580]     Audit-Id: ce3b1b8b-150e-4dca-b9cd-2ab4c411b1b5
	I0807 20:02:52.016774    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:52.017843    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:52.017906    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:52.017906    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:52.017906    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:52.021116    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:52.021116    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:52.021116    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:52.021116    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:52.021116    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:52.021116    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:52.021116    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:52 GMT
	I0807 20:02:52.021116    1172 round_trippers.go:580]     Audit-Id: 24819833-a381-4585-957b-0fdb395d0949
	I0807 20:02:52.021116    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:52.022082    1172 pod_ready.go:102] pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace has status "Ready":"False"
	I0807 20:02:52.500560    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:52.500560    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:52.500560    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:52.500560    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:52.505368    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:52.505368    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:52.505368    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:52 GMT
	I0807 20:02:52.505368    1172 round_trippers.go:580]     Audit-Id: 4f849644-05ad-472b-84a7-4b3b6c07e57e
	I0807 20:02:52.505368    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:52.505368    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:52.505368    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:52.505368    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:52.505706    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:52.506653    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:52.506653    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:52.506653    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:52.506653    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:52.510287    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:52.510287    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:52.510287    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:52.510287    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:52.510287    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:52.510287    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:52 GMT
	I0807 20:02:52.510287    1172 round_trippers.go:580]     Audit-Id: 2a14043f-7e43-44f1-bff1-2a6a371b9028
	I0807 20:02:52.510287    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:52.511139    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:53.001439    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:53.001439    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:53.001688    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:53.001688    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:53.006302    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:53.006855    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:53.006924    1172 round_trippers.go:580]     Audit-Id: 7e1c77eb-e021-4678-a1e8-d07012b5bde0
	I0807 20:02:53.006968    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:53.006968    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:53.007050    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:53.007050    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:53.007050    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:53 GMT
	I0807 20:02:53.008209    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:53.008858    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:53.008858    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:53.008858    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:53.008858    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:53.014031    1172 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 20:02:53.014031    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:53.014031    1172 round_trippers.go:580]     Audit-Id: b43739a0-4a0c-4e9f-89be-3b28bd826ad8
	I0807 20:02:53.014031    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:53.014031    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:53.014031    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:53.014031    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:53.014031    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:53 GMT
	I0807 20:02:53.014860    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:53.501743    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:53.501743    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:53.501868    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:53.501868    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:53.505202    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:53.505202    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:53.505202    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:53.505202    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:53.505202    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:53.505202    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:53 GMT
	I0807 20:02:53.505202    1172 round_trippers.go:580]     Audit-Id: 74e35494-5f03-476d-a122-f8167c96d2ed
	I0807 20:02:53.505202    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:53.506481    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:53.507315    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:53.507381    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:53.507381    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:53.507381    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:53.510645    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:53.510845    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:53.510973    1172 round_trippers.go:580]     Audit-Id: 10739d98-48fd-4e73-b460-8027706ecff8
	I0807 20:02:53.510973    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:53.510973    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:53.510973    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:53.510973    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:53.510973    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:53 GMT
	I0807 20:02:53.511214    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:54.001411    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:54.001411    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:54.001512    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:54.001512    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:54.005959    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:54.006527    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:54.006527    1172 round_trippers.go:580]     Audit-Id: 996b2645-bb48-4b0a-999e-4f29313ff17d
	I0807 20:02:54.006527    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:54.006527    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:54.006527    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:54.006527    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:54.006527    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:54 GMT
	I0807 20:02:54.006710    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:54.007761    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:54.007761    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:54.007761    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:54.007814    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:54.009540    1172 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0807 20:02:54.010553    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:54.010553    1172 round_trippers.go:580]     Audit-Id: a5de278f-c66e-4c90-98cf-1ae670caa8ad
	I0807 20:02:54.010553    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:54.010553    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:54.010624    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:54.010624    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:54.010624    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:54 GMT
	I0807 20:02:54.010845    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:54.500449    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:54.500449    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:54.500449    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:54.500449    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:54.507714    1172 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 20:02:54.507797    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:54.507797    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:54.507797    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:54.507944    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:54.507944    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:54 GMT
	I0807 20:02:54.507944    1172 round_trippers.go:580]     Audit-Id: ca3c47f3-5ecc-4501-a05a-ed7e8fbf427f
	I0807 20:02:54.507944    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:54.507985    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:54.508892    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:54.508892    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:54.508942    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:54.508942    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:54.511128    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:54.511512    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:54.511512    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:54.511512    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:54.511512    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:54 GMT
	I0807 20:02:54.511512    1172 round_trippers.go:580]     Audit-Id: 2c932c84-4dca-4dfc-8549-03d5c2a83397
	I0807 20:02:54.511512    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:54.511512    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:54.511605    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:54.511605    1172 pod_ready.go:102] pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace has status "Ready":"False"
	I0807 20:02:55.001627    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:55.001627    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:55.001627    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:55.001627    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:55.006970    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:55.007042    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:55.007042    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:55.007134    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:55.007134    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:55.007134    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:55 GMT
	I0807 20:02:55.007134    1172 round_trippers.go:580]     Audit-Id: ff67b1af-3a9b-4fea-b317-060feb112841
	I0807 20:02:55.007134    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:55.007463    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:55.008224    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:55.008224    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:55.008345    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:55.008345    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:55.011645    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:55.011645    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:55.011645    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:55.011645    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:55.011645    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:55.011645    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:55 GMT
	I0807 20:02:55.011645    1172 round_trippers.go:580]     Audit-Id: f1b58de9-2b8a-4077-91e3-bfb599ba2a87
	I0807 20:02:55.011645    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:55.011645    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:55.500757    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:55.500757    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:55.500757    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:55.500757    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:55.505292    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:55.505365    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:55.505365    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:55.505365    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:55.505365    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:55.505464    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:55.505464    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:55 GMT
	I0807 20:02:55.505464    1172 round_trippers.go:580]     Audit-Id: 1b4ff5a8-e965-4852-9241-756420014182
	I0807 20:02:55.506180    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:55.506977    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:55.507051    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:55.507051    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:55.507051    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:55.512046    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:55.512046    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:55.512046    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:55 GMT
	I0807 20:02:55.512046    1172 round_trippers.go:580]     Audit-Id: 475d8975-fc07-4356-91fe-93376834f2c3
	I0807 20:02:55.512046    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:55.512046    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:55.512046    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:55.512046    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:55.512046    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:55.999308    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:55.999308    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:55.999308    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:55.999308    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:56.002984    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:56.002984    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:56.003839    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:56 GMT
	I0807 20:02:56.003839    1172 round_trippers.go:580]     Audit-Id: cb71f139-3006-4907-b5b0-cc6660b63193
	I0807 20:02:56.003839    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:56.003839    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:56.003839    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:56.003839    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:56.004765    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:56.005675    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:56.005675    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:56.005782    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:56.005782    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:56.010216    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:56.010238    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:56.010238    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:56 GMT
	I0807 20:02:56.010238    1172 round_trippers.go:580]     Audit-Id: 5d9a95bb-ccea-4dd3-9584-e899b2ee0df8
	I0807 20:02:56.010238    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:56.010238    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:56.010311    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:56.010311    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:56.011344    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:56.514629    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:56.514684    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:56.514730    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:56.514730    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:56.519263    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:56.519263    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:56.519263    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:56.519263    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:56 GMT
	I0807 20:02:56.519263    1172 round_trippers.go:580]     Audit-Id: df9eb2a6-4918-4f63-95c5-358f95169b8f
	I0807 20:02:56.519811    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:56.519811    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:56.519811    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:56.520176    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:56.520606    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:56.520606    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:56.521137    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:56.521137    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:56.522833    1172 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0807 20:02:56.522833    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:56.522833    1172 round_trippers.go:580]     Audit-Id: 86c00543-9d8a-4f77-a02c-dfdec503fdde
	I0807 20:02:56.522833    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:56.522833    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:56.522833    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:56.522833    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:56.523834    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:56 GMT
	I0807 20:02:56.524156    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:56.524835    1172 pod_ready.go:102] pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace has status "Ready":"False"
	I0807 20:02:57.012663    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:57.012842    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:57.012842    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:57.012842    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:57.017000    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:57.017000    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:57.017000    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:57 GMT
	I0807 20:02:57.017000    1172 round_trippers.go:580]     Audit-Id: 59d5668c-eb36-49b2-9698-697db15ea1ff
	I0807 20:02:57.017000    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:57.017000    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:57.017000    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:57.017000    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:57.020655    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:57.021446    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:57.021446    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:57.021560    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:57.021560    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:57.023849    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:57.023849    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:57.023849    1172 round_trippers.go:580]     Audit-Id: 3427f1aa-7aaa-4cc7-bb2c-128d6b4871a2
	I0807 20:02:57.023849    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:57.023849    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:57.023849    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:57.024235    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:57.024235    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:57 GMT
	I0807 20:02:57.024314    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:57.499241    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:57.499360    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:57.499360    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:57.499360    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:57.503200    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:57.503200    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:57.503566    1172 round_trippers.go:580]     Audit-Id: 10ef5145-48e9-495c-909a-06334b149822
	I0807 20:02:57.503566    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:57.503566    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:57.503566    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:57.503566    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:57.503566    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:57 GMT
	I0807 20:02:57.503761    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:57.504516    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:57.504516    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:57.504516    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:57.504516    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:57.507421    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:57.507421    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:57.507421    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:57.507421    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:57.507421    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:57.507421    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:57 GMT
	I0807 20:02:57.507421    1172 round_trippers.go:580]     Audit-Id: eabb99dc-bc8e-42f6-9d49-686628ff47e8
	I0807 20:02:57.507421    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:57.507421    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:57.999838    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:57.999914    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:57.999914    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:57.999914    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:58.008837    1172 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 20:02:58.008837    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:58.008837    1172 round_trippers.go:580]     Audit-Id: 44bc3223-b83f-4a44-a1ae-c4641b9c0452
	I0807 20:02:58.008837    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:58.008837    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:58.008837    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:58.008837    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:58.008837    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:58 GMT
	I0807 20:02:58.008837    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:58.009675    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:58.009675    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:58.009675    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:58.009675    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:58.012309    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:58.013392    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:58.013392    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:58.013392    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:58.013428    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:58 GMT
	I0807 20:02:58.013428    1172 round_trippers.go:580]     Audit-Id: 52ae4a43-1208-4820-8f64-eca2ee7e288d
	I0807 20:02:58.013428    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:58.013428    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:58.013648    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:58.499327    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:58.499327    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:58.499327    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:58.499327    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:58.504915    1172 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 20:02:58.504915    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:58.504915    1172 round_trippers.go:580]     Audit-Id: b4f4dfcb-300b-4c32-a5af-d5720b7ad022
	I0807 20:02:58.504915    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:58.505144    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:58.505144    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:58.505144    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:58.505144    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:58 GMT
	I0807 20:02:58.505331    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:58.506140    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:58.506254    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:58.506254    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:58.506254    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:58.509185    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:58.509185    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:58.509185    1172 round_trippers.go:580]     Audit-Id: ad60ac05-592b-4aef-a5a7-10940feac597
	I0807 20:02:58.509185    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:58.509350    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:58.509350    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:58.509350    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:58.509350    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:58 GMT
	I0807 20:02:58.509693    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:58.999631    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:58.999869    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:58.999998    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:58.999998    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:59.004560    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:59.004560    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:59.004560    1172 round_trippers.go:580]     Audit-Id: 496115ba-382f-47f1-9ff2-7d902a5991d7
	I0807 20:02:59.005012    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:59.005012    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:59.005071    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:59.005071    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:59.005114    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:59 GMT
	I0807 20:02:59.005320    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:59.006121    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:59.006149    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:59.006149    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:59.006149    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:59.009143    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:59.009143    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:59.009143    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:59.009143    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:59.009143    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:59 GMT
	I0807 20:02:59.009143    1172 round_trippers.go:580]     Audit-Id: 39e19874-a5d9-4fb9-bb04-90e4f8dd41fc
	I0807 20:02:59.009143    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:59.009143    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:59.009831    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:59.010353    1172 pod_ready.go:102] pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace has status "Ready":"False"
	I0807 20:02:59.500331    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:59.500400    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:59.500400    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:59.500400    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:59.506464    1172 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 20:02:59.506464    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:59.506464    1172 round_trippers.go:580]     Audit-Id: 3ab00976-8899-4485-aadd-7d5de388497f
	I0807 20:02:59.506464    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:59.506464    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:59.506464    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:59.506464    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:59.506464    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:59 GMT
	I0807 20:02:59.507362    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:59.507362    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:59.507362    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:59.507362    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:59.507362    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:59.511393    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:59.511393    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:59.511393    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:59.511393    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:59.511393    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:59 GMT
	I0807 20:02:59.511393    1172 round_trippers.go:580]     Audit-Id: cc2940a5-8c80-4e39-8a58-98ef620053ea
	I0807 20:02:59.511393    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:59.511393    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:59.512383    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:03:00.008214    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:03:00.008530    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:00.008616    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:00.008616    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:00.013096    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:03:00.013096    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:00.013096    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:00 GMT
	I0807 20:03:00.013096    1172 round_trippers.go:580]     Audit-Id: b4d991b9-4d31-41fc-83a0-0204593a7401
	I0807 20:03:00.013096    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:00.013096    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:00.013096    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:00.013096    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:00.013096    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:03:00.014089    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:00.014089    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:00.014089    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:00.014089    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:00.020100    1172 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 20:03:00.020190    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:00.020190    1172 round_trippers.go:580]     Audit-Id: 2faffb69-bd3f-4f2f-9f5a-2984a8099eb6
	I0807 20:03:00.020190    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:00.020190    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:00.020190    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:00.020190    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:00.020257    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:00 GMT
	I0807 20:03:00.020257    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:03:00.498974    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:03:00.498974    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:00.498974    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:00.498974    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:00.507990    1172 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0807 20:03:00.508302    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:00.508302    1172 round_trippers.go:580]     Audit-Id: 5dc31837-79cc-48c5-9a06-455e6ef855cc
	I0807 20:03:00.508302    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:00.508302    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:00.508302    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:00.508302    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:00.508302    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:00 GMT
	I0807 20:03:00.508569    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"2028","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7017 chars]
	I0807 20:03:00.509839    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:00.509917    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:00.509917    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:00.509917    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:00.518265    1172 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 20:03:00.518265    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:00.518265    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:00 GMT
	I0807 20:03:00.518265    1172 round_trippers.go:580]     Audit-Id: 27f742d6-064a-4039-8044-1a09157d974f
	I0807 20:03:00.518265    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:00.518265    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:00.518265    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:00.518265    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:00.518265    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:03:00.999824    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:03:01.000093    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:01.000093    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:01.000093    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:01.004567    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:03:01.004567    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:01.004567    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:01.004567    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:01.004567    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:01.004567    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:01.004567    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:01 GMT
	I0807 20:03:01.004567    1172 round_trippers.go:580]     Audit-Id: 114929a0-f56b-4589-b6ab-6cc30437105d
	I0807 20:03:01.004567    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"2028","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7017 chars]
	I0807 20:03:01.005603    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:01.005603    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:01.005603    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:01.005603    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:01.007919    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:03:01.007919    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:01.007919    1172 round_trippers.go:580]     Audit-Id: 317cfa20-7f59-4a15-a483-36136dce8fd0
	I0807 20:03:01.007919    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:01.007919    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:01.007919    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:01.007919    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:01.008525    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:01 GMT
	I0807 20:03:01.008782    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:03:01.501149    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:03:01.501206    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:01.501206    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:01.501264    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:01.505599    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:03:01.505879    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:01.505879    1172 round_trippers.go:580]     Audit-Id: 8c9e0de0-89f4-4e83-ad2b-e5faa8a96887
	I0807 20:03:01.505936    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:01.505936    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:01.505936    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:01.505936    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:01.505936    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:01 GMT
	I0807 20:03:01.505936    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"2028","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7017 chars]
	I0807 20:03:01.507525    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:01.507525    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:01.507525    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:01.507525    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:01.513430    1172 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 20:03:01.513636    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:01.513636    1172 round_trippers.go:580]     Audit-Id: 5b8dfcea-550e-4984-ad4d-903009f40f5b
	I0807 20:03:01.513636    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:01.513636    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:01.513636    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:01.513636    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:01.513636    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:01 GMT
	I0807 20:03:01.514168    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:03:01.514383    1172 pod_ready.go:102] pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace has status "Ready":"False"
	I0807 20:03:02.005113    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:03:02.005113    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.005113    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.005113    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.008733    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:03:02.008733    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.009483    1172 round_trippers.go:580]     Audit-Id: 3cb274e2-d5c3-4ad3-bdae-daee9175a420
	I0807 20:03:02.009483    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.009483    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.009483    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.009580    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.009580    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.009635    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"2034","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6788 chars]
	I0807 20:03:02.010550    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:02.010550    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.010550    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.010550    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.015692    1172 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 20:03:02.015769    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.015769    1172 round_trippers.go:580]     Audit-Id: 4822affe-a42e-41bc-bf44-8b144609c799
	I0807 20:03:02.015769    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.015769    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.015769    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.015845    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.015865    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.017474    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:03:02.017670    1172 pod_ready.go:92] pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace has status "Ready":"True"
	I0807 20:03:02.017670    1172 pod_ready.go:81] duration metric: took 14.5198049s for pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.017670    1172 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.017670    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-116700
	I0807 20:03:02.017670    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.017670    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.017670    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.020674    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:03:02.020674    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.020674    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.020674    1172 round_trippers.go:580]     Audit-Id: cd777aa6-9437-407b-af59-45654df48fb7
	I0807 20:03:02.020674    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.020674    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.020674    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.020674    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.020674    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-116700","namespace":"kube-system","uid":"822f1e63-7c8a-4172-927c-32f4e0b5d505","resourceVersion":"1992","creationTimestamp":"2024-08-07T20:02:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.226.95:2379","kubernetes.io/config.hash":"9eecaca34ea754a7954ea8f568cb96d3","kubernetes.io/config.mirror":"9eecaca34ea754a7954ea8f568cb96d3","kubernetes.io/config.seen":"2024-08-07T20:02:20.493455845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T20:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6160 chars]
	I0807 20:03:02.020674    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:02.021793    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.021793    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.021828    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.024423    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:03:02.024423    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.024423    1172 round_trippers.go:580]     Audit-Id: 12aa9d7c-bc74-476a-bda2-43dfad5e450f
	I0807 20:03:02.024423    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.024423    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.024423    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.024423    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.024423    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.025412    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:03:02.025412    1172 pod_ready.go:92] pod "etcd-multinode-116700" in "kube-system" namespace has status "Ready":"True"
	I0807 20:03:02.025412    1172 pod_ready.go:81] duration metric: took 7.7428ms for pod "etcd-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.025412    1172 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.025412    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-116700
	I0807 20:03:02.025412    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.025412    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.025412    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.029431    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:03:02.029431    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.030413    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.030413    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.030413    1172 round_trippers.go:580]     Audit-Id: 8e373379-5950-47e8-a440-824a1c6e4524
	I0807 20:03:02.030413    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.030413    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.030413    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.030413    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-116700","namespace":"kube-system","uid":"5111ea6a-eb9d-4e60-bbc5-698a5882a60a","resourceVersion":"1970","creationTimestamp":"2024-08-07T20:02:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.226.95:8443","kubernetes.io/config.hash":"8066c637edc34431d2657878d0b69f79","kubernetes.io/config.mirror":"8066c637edc34431d2657878d0b69f79","kubernetes.io/config.seen":"2024-08-07T20:02:20.432683231Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T20:02:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7695 chars]
	I0807 20:03:02.031525    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:02.031557    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.031557    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.031557    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.034250    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:03:02.034250    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.034250    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.034250    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.034902    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.034902    1172 round_trippers.go:580]     Audit-Id: a3d39749-5260-4a2f-b7e4-abb93333a3cc
	I0807 20:03:02.034902    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.034902    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.035307    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:03:02.035730    1172 pod_ready.go:92] pod "kube-apiserver-multinode-116700" in "kube-system" namespace has status "Ready":"True"
	I0807 20:03:02.035730    1172 pod_ready.go:81] duration metric: took 10.3169ms for pod "kube-apiserver-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.035730    1172 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.035730    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-116700
	I0807 20:03:02.035730    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.035730    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.035730    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.038314    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:03:02.038314    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.038314    1172 round_trippers.go:580]     Audit-Id: a277fb69-5905-4e8f-bc9f-895997a657a5
	I0807 20:03:02.038314    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.038314    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.038314    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.038314    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.038314    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.041299    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-116700","namespace":"kube-system","uid":"4d2e8250-9b12-4277-8834-515c1621fc78","resourceVersion":"1960","creationTimestamp":"2024-08-07T19:37:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ef62d358a9b469de2443e4a4f620921d","kubernetes.io/config.mirror":"ef62d358a9b469de2443e4a4f620921d","kubernetes.io/config.seen":"2024-08-07T19:37:39.552053960Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7470 chars]
	I0807 20:03:02.042746    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:02.042804    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.042804    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.042804    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.045319    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:03:02.045632    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.045632    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.045632    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.045632    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.045632    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.045632    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.045632    1172 round_trippers.go:580]     Audit-Id: 5b954eda-be05-4299-8f48-046b0ac1561a
	I0807 20:03:02.045632    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:03:02.046537    1172 pod_ready.go:92] pod "kube-controller-manager-multinode-116700" in "kube-system" namespace has status "Ready":"True"
	I0807 20:03:02.046615    1172 pod_ready.go:81] duration metric: took 10.885ms for pod "kube-controller-manager-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.046615    1172 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4lnjd" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.046697    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4lnjd
	I0807 20:03:02.046796    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.046796    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.046827    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.050166    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:03:02.050166    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.050166    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.050166    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.050166    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.050166    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.050166    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.050166    1172 round_trippers.go:580]     Audit-Id: 9ba0a788-5cec-4a48-946a-54e9ebcce385
	I0807 20:03:02.050166    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4lnjd","generateName":"kube-proxy-","namespace":"kube-system","uid":"254c1a93-f57b-4997-a3a1-d5f145f7c549","resourceVersion":"1843","creationTimestamp":"2024-08-07T19:46:10Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d82856c0-3330-4ab9-b7bf-54ed48646bce","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:46:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d82856c0-3330-4ab9-b7bf-54ed48646bce\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0807 20:03:02.051211    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700-m03
	I0807 20:03:02.051277    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.051313    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.051340    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.054188    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:03:02.055057    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.055057    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.055057    1172 round_trippers.go:580]     Audit-Id: 200816e2-1bfa-4f25-b8ec-01d896a5a1f0
	I0807 20:03:02.055057    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.055057    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.055057    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.055057    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.055308    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m03","uid":"9ade310d-2eba-4d92-8b38-64ccda5e080c","resourceVersion":"2012","creationTimestamp":"2024-08-07T19:57:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_57_34_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:57:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4400 chars]
	I0807 20:03:02.055634    1172 pod_ready.go:97] node "multinode-116700-m03" hosting pod "kube-proxy-4lnjd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700-m03" has status "Ready":"Unknown"
	I0807 20:03:02.055634    1172 pod_ready.go:81] duration metric: took 9.0194ms for pod "kube-proxy-4lnjd" in "kube-system" namespace to be "Ready" ...
	E0807 20:03:02.055634    1172 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-116700-m03" hosting pod "kube-proxy-4lnjd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700-m03" has status "Ready":"Unknown"
	I0807 20:03:02.055634    1172 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fmjt9" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.208640    1172 request.go:629] Waited for 152.9204ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fmjt9
	I0807 20:03:02.208946    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fmjt9
	I0807 20:03:02.208946    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.208946    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.208946    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.212345    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:03:02.212345    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.213368    1172 round_trippers.go:580]     Audit-Id: 5da7ba5d-119a-4259-a9e5-8b876f17c7b7
	I0807 20:03:02.213368    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.213402    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.213402    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.213402    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.213402    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.213482    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fmjt9","generateName":"kube-proxy-","namespace":"kube-system","uid":"766df91e-8fd0-457b-8c11-8810059ca4d9","resourceVersion":"1952","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d82856c0-3330-4ab9-b7bf-54ed48646bce","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d82856c0-3330-4ab9-b7bf-54ed48646bce\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0807 20:03:02.410781    1172 request.go:629] Waited for 196.5313ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:02.411097    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:02.411097    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.411097    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.411097    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.413710    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:03:02.413710    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.414465    1172 round_trippers.go:580]     Audit-Id: 03053c05-0c5c-4156-9b9d-9a6521f1e111
	I0807 20:03:02.414465    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.414465    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.414465    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.414465    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.414465    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.414832    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:03:02.415427    1172 pod_ready.go:92] pod "kube-proxy-fmjt9" in "kube-system" namespace has status "Ready":"True"
	I0807 20:03:02.415526    1172 pod_ready.go:81] duration metric: took 359.8876ms for pod "kube-proxy-fmjt9" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.415526    1172 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vcb7n" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.613599    1172 request.go:629] Waited for 197.9983ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vcb7n
	I0807 20:03:02.613599    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vcb7n
	I0807 20:03:02.613846    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.613846    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.613846    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.618143    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:03:02.618143    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.618143    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.619156    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.619156    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.619156    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.619183    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.619183    1172 round_trippers.go:580]     Audit-Id: 157478b2-c889-4f1d-9e0c-1388ce8e9c9b
	I0807 20:03:02.619356    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vcb7n","generateName":"kube-proxy-","namespace":"kube-system","uid":"d8d87ad6-19cc-45fa-8c9f-1a862fec4e59","resourceVersion":"661","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d82856c0-3330-4ab9-b7bf-54ed48646bce","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d82856c0-3330-4ab9-b7bf-54ed48646bce\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5836 chars]
	I0807 20:03:02.816583    1172 request.go:629] Waited for 196.3547ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/nodes/multinode-116700-m02
	I0807 20:03:02.816583    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700-m02
	I0807 20:03:02.816824    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.816824    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.816824    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.820114    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:03:02.820190    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.820190    1172 round_trippers.go:580]     Audit-Id: d642448a-b7cd-41f5-a272-2aaa5e7a1c22
	I0807 20:03:02.820190    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.820190    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.820190    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.820190    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.820190    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.820543    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"1754","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3826 chars]
	I0807 20:03:02.821167    1172 pod_ready.go:92] pod "kube-proxy-vcb7n" in "kube-system" namespace has status "Ready":"True"
	I0807 20:03:02.821167    1172 pod_ready.go:81] duration metric: took 405.6359ms for pod "kube-proxy-vcb7n" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.821167    1172 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:03.019604    1172 request.go:629] Waited for 198.3351ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-116700
	I0807 20:03:03.019845    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-116700
	I0807 20:03:03.020098    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:03.020179    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:03.020522    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:03.025162    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:03:03.025162    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:03.025860    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:03.025860    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:03.025860    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:03.025860    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:03 GMT
	I0807 20:03:03.025860    1172 round_trippers.go:580]     Audit-Id: 373f633d-00b8-4723-b5d1-57e5fa7fb3e3
	I0807 20:03:03.025860    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:03.026273    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-116700","namespace":"kube-system","uid":"7b6df7b7-8c94-498a-bc4c-74d72efd572a","resourceVersion":"1996","creationTimestamp":"2024-08-07T19:37:39Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fde91c95fce6faff219ccfa4b0b2484c","kubernetes.io/config.mirror":"fde91c95fce6faff219ccfa4b0b2484c","kubernetes.io/config.seen":"2024-08-07T19:37:39.552047359Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5200 chars]
	I0807 20:03:03.207819    1172 request.go:629] Waited for 180.795ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:03.207819    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:03.207819    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:03.207819    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:03.207819    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:03.212461    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:03:03.212461    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:03.212461    1172 round_trippers.go:580]     Audit-Id: 1976ae5c-cb83-4f09-9992-eaf24de0b5c0
	I0807 20:03:03.212461    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:03.212845    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:03.212845    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:03.212845    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:03.212845    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:03 GMT
	I0807 20:03:03.213233    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:03:03.214072    1172 pod_ready.go:92] pod "kube-scheduler-multinode-116700" in "kube-system" namespace has status "Ready":"True"
	I0807 20:03:03.214170    1172 pod_ready.go:81] duration metric: took 392.9977ms for pod "kube-scheduler-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:03.214170    1172 pod_ready.go:38] duration metric: took 15.7317317s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 20:03:03.214267    1172 api_server.go:52] waiting for apiserver process to appear ...
	I0807 20:03:03.228733    1172 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 20:03:03.258795    1172 command_runner.go:130] > 1971
	I0807 20:03:03.258795    1172 api_server.go:72] duration metric: took 31.5985841s to wait for apiserver process to appear ...
	I0807 20:03:03.258795    1172 api_server.go:88] waiting for apiserver healthz status ...
	I0807 20:03:03.258962    1172 api_server.go:253] Checking apiserver healthz at https://172.28.226.95:8443/healthz ...
	I0807 20:03:03.266616    1172 api_server.go:279] https://172.28.226.95:8443/healthz returned 200:
	ok
	I0807 20:03:03.266900    1172 round_trippers.go:463] GET https://172.28.226.95:8443/version
	I0807 20:03:03.266946    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:03.266946    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:03.266978    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:03.268783    1172 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0807 20:03:03.268783    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:03.268783    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:03.269325    1172 round_trippers.go:580]     Content-Length: 263
	I0807 20:03:03.269325    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:03 GMT
	I0807 20:03:03.269325    1172 round_trippers.go:580]     Audit-Id: f4a30aa2-293b-493a-93e5-1d6e247793fc
	I0807 20:03:03.269325    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:03.269325    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:03.269325    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:03.269325    1172 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0807 20:03:03.269566    1172 api_server.go:141] control plane version: v1.30.3
	I0807 20:03:03.269608    1172 api_server.go:131] duration metric: took 10.6457ms to wait for apiserver health ...
	I0807 20:03:03.269608    1172 system_pods.go:43] waiting for kube-system pods to appear ...
	I0807 20:03:03.411965    1172 request.go:629] Waited for 142.1045ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods
	I0807 20:03:03.412241    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods
	I0807 20:03:03.412241    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:03.412337    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:03.412410    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:03.418723    1172 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 20:03:03.419648    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:03.419648    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:03 GMT
	I0807 20:03:03.419648    1172 round_trippers.go:580]     Audit-Id: 535244d1-8f7f-4c0f-a8e7-4c7e55c46053
	I0807 20:03:03.419648    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:03.419648    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:03.419648    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:03.419648    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:03.421579    1172 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2039"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"2034","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86523 chars]
	I0807 20:03:03.425433    1172 system_pods.go:59] 12 kube-system pods found
	I0807 20:03:03.425433    1172 system_pods.go:61] "coredns-7db6d8ff4d-7l6v2" [7de73f9c-93d9-46c6-ae10-b253dd257a19] Running
	I0807 20:03:03.425433    1172 system_pods.go:61] "etcd-multinode-116700" [822f1e63-7c8a-4172-927c-32f4e0b5d505] Running
	I0807 20:03:03.425433    1172 system_pods.go:61] "kindnet-gk542" [bad4e2c3-505e-4175-9a5b-186a1874ff8d] Running
	I0807 20:03:03.425433    1172 system_pods.go:61] "kindnet-gsjlq" [7dac93b0-0cfa-4d64-a437-ce92de8bf57d] Running
	I0807 20:03:03.425433    1172 system_pods.go:61] "kindnet-kltmx" [b2ddfdd4-b957-45e3-b967-cf2650e86069] Running
	I0807 20:03:03.425433    1172 system_pods.go:61] "kube-apiserver-multinode-116700" [5111ea6a-eb9d-4e60-bbc5-698a5882a60a] Running
	I0807 20:03:03.425433    1172 system_pods.go:61] "kube-controller-manager-multinode-116700" [4d2e8250-9b12-4277-8834-515c1621fc78] Running
	I0807 20:03:03.425433    1172 system_pods.go:61] "kube-proxy-4lnjd" [254c1a93-f57b-4997-a3a1-d5f145f7c549] Running
	I0807 20:03:03.425433    1172 system_pods.go:61] "kube-proxy-fmjt9" [766df91e-8fd0-457b-8c11-8810059ca4d9] Running
	I0807 20:03:03.425433    1172 system_pods.go:61] "kube-proxy-vcb7n" [d8d87ad6-19cc-45fa-8c9f-1a862fec4e59] Running
	I0807 20:03:03.425433    1172 system_pods.go:61] "kube-scheduler-multinode-116700" [7b6df7b7-8c94-498a-bc4c-74d72efd572a] Running
	I0807 20:03:03.425433    1172 system_pods.go:61] "storage-provisioner" [8a8036f6-f1a0-4fca-b8dd-ed99c3535b47] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0807 20:03:03.425965    1172 system_pods.go:74] duration metric: took 156.3553ms to wait for pod list to return data ...
	I0807 20:03:03.425965    1172 default_sa.go:34] waiting for default service account to be created ...
	I0807 20:03:03.614865    1172 request.go:629] Waited for 188.4012ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/namespaces/default/serviceaccounts
	I0807 20:03:03.614865    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/default/serviceaccounts
	I0807 20:03:03.614865    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:03.614865    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:03.614865    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:03.619263    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:03:03.619676    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:03.619676    1172 round_trippers.go:580]     Audit-Id: 2b4430e2-794e-4a1d-99de-30c7c4731427
	I0807 20:03:03.619676    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:03.619676    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:03.619676    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:03.619676    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:03.619676    1172 round_trippers.go:580]     Content-Length: 262
	I0807 20:03:03.619787    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:03 GMT
	I0807 20:03:03.619787    1172 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"2039"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"f9ade84e-dceb-49d5-8e06-66799b7c129c","resourceVersion":"345","creationTimestamp":"2024-08-07T19:37:52Z"}}]}
	I0807 20:03:03.620243    1172 default_sa.go:45] found service account: "default"
	I0807 20:03:03.620332    1172 default_sa.go:55] duration metric: took 194.2755ms for default service account to be created ...
	I0807 20:03:03.620332    1172 system_pods.go:116] waiting for k8s-apps to be running ...
	I0807 20:03:03.817757    1172 request.go:629] Waited for 197.0137ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods
	I0807 20:03:03.817858    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods
	I0807 20:03:03.817858    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:03.817858    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:03.817858    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:03.825995    1172 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 20:03:03.825995    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:03.826958    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:03.826958    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:03.826958    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:03.826958    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:03 GMT
	I0807 20:03:03.826958    1172 round_trippers.go:580]     Audit-Id: 037451bc-8c89-4251-80a8-fba82a981de3
	I0807 20:03:03.826958    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:03.828763    1172 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2039"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"2034","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86523 chars]
	I0807 20:03:03.834181    1172 system_pods.go:86] 12 kube-system pods found
	I0807 20:03:03.834181    1172 system_pods.go:89] "coredns-7db6d8ff4d-7l6v2" [7de73f9c-93d9-46c6-ae10-b253dd257a19] Running
	I0807 20:03:03.834181    1172 system_pods.go:89] "etcd-multinode-116700" [822f1e63-7c8a-4172-927c-32f4e0b5d505] Running
	I0807 20:03:03.834181    1172 system_pods.go:89] "kindnet-gk542" [bad4e2c3-505e-4175-9a5b-186a1874ff8d] Running
	I0807 20:03:03.834181    1172 system_pods.go:89] "kindnet-gsjlq" [7dac93b0-0cfa-4d64-a437-ce92de8bf57d] Running
	I0807 20:03:03.834181    1172 system_pods.go:89] "kindnet-kltmx" [b2ddfdd4-b957-45e3-b967-cf2650e86069] Running
	I0807 20:03:03.834181    1172 system_pods.go:89] "kube-apiserver-multinode-116700" [5111ea6a-eb9d-4e60-bbc5-698a5882a60a] Running
	I0807 20:03:03.834181    1172 system_pods.go:89] "kube-controller-manager-multinode-116700" [4d2e8250-9b12-4277-8834-515c1621fc78] Running
	I0807 20:03:03.834181    1172 system_pods.go:89] "kube-proxy-4lnjd" [254c1a93-f57b-4997-a3a1-d5f145f7c549] Running
	I0807 20:03:03.834181    1172 system_pods.go:89] "kube-proxy-fmjt9" [766df91e-8fd0-457b-8c11-8810059ca4d9] Running
	I0807 20:03:03.834181    1172 system_pods.go:89] "kube-proxy-vcb7n" [d8d87ad6-19cc-45fa-8c9f-1a862fec4e59] Running
	I0807 20:03:03.834181    1172 system_pods.go:89] "kube-scheduler-multinode-116700" [7b6df7b7-8c94-498a-bc4c-74d72efd572a] Running
	I0807 20:03:03.834181    1172 system_pods.go:89] "storage-provisioner" [8a8036f6-f1a0-4fca-b8dd-ed99c3535b47] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0807 20:03:03.834181    1172 system_pods.go:126] duration metric: took 213.8466ms to wait for k8s-apps to be running ...
	I0807 20:03:03.834181    1172 system_svc.go:44] waiting for kubelet service to be running ....
	I0807 20:03:03.848868    1172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 20:03:03.875825    1172 system_svc.go:56] duration metric: took 41.6429ms WaitForService to wait for kubelet
	I0807 20:03:03.875825    1172 kubeadm.go:582] duration metric: took 32.2156058s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 20:03:03.875825    1172 node_conditions.go:102] verifying NodePressure condition ...
	I0807 20:03:04.005323    1172 request.go:629] Waited for 129.3724ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/nodes
	I0807 20:03:04.005593    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes
	I0807 20:03:04.005749    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:04.005749    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:04.005749    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:04.011071    1172 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 20:03:04.011899    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:04.011899    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:04.011899    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:04.011899    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:04.011899    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:04 GMT
	I0807 20:03:04.011899    1172 round_trippers.go:580]     Audit-Id: 7803b889-490f-4885-8672-d15f9f19f7aa
	I0807 20:03:04.011899    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:04.012689    1172 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2039"},"items":[{"metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15502 chars]
	I0807 20:03:04.013651    1172 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 20:03:04.013706    1172 node_conditions.go:123] node cpu capacity is 2
	I0807 20:03:04.013706    1172 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 20:03:04.013706    1172 node_conditions.go:123] node cpu capacity is 2
	I0807 20:03:04.013706    1172 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 20:03:04.013771    1172 node_conditions.go:123] node cpu capacity is 2
	I0807 20:03:04.013771    1172 node_conditions.go:105] duration metric: took 137.9447ms to run NodePressure ...
	I0807 20:03:04.013771    1172 start.go:241] waiting for startup goroutines ...
	I0807 20:03:04.013771    1172 start.go:246] waiting for cluster config update ...
	I0807 20:03:04.013771    1172 start.go:255] writing updated cluster config ...
	I0807 20:03:04.018293    1172 out.go:177] 
	I0807 20:03:04.021879    1172 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 20:03:04.028607    1172 config.go:182] Loaded profile config "multinode-116700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 20:03:04.029218    1172 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\config.json ...
	I0807 20:03:04.035607    1172 out.go:177] * Starting "multinode-116700-m02" worker node in "multinode-116700" cluster
	I0807 20:03:04.037608    1172 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 20:03:04.037608    1172 cache.go:56] Caching tarball of preloaded images
	I0807 20:03:04.038771    1172 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0807 20:03:04.038946    1172 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 20:03:04.039004    1172 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\config.json ...
	I0807 20:03:04.041079    1172 start.go:360] acquireMachinesLock for multinode-116700-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 20:03:04.041079    1172 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-116700-m02"
	I0807 20:03:04.041079    1172 start.go:96] Skipping create...Using existing machine configuration
	I0807 20:03:04.041079    1172 fix.go:54] fixHost starting: m02
	I0807 20:03:04.042523    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:03:06.270197    1172 main.go:141] libmachine: [stdout =====>] : Off
	
	I0807 20:03:06.271149    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:06.271249    1172 fix.go:112] recreateIfNeeded on multinode-116700-m02: state=Stopped err=<nil>
	W0807 20:03:06.271249    1172 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 20:03:06.277623    1172 out.go:177] * Restarting existing hyperv VM for "multinode-116700-m02" ...
	I0807 20:03:06.280612    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-116700-m02
	I0807 20:03:09.506197    1172 main.go:141] libmachine: [stdout =====>] : 
	I0807 20:03:09.506197    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:09.506197    1172 main.go:141] libmachine: Waiting for host to start...
	I0807 20:03:09.506197    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:03:11.872665    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:03:11.872665    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:11.872821    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:03:14.501913    1172 main.go:141] libmachine: [stdout =====>] : 
	I0807 20:03:14.501913    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:15.512262    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:03:17.830256    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:03:17.830642    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:17.830642    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:03:20.520797    1172 main.go:141] libmachine: [stdout =====>] : 
	I0807 20:03:20.520953    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:21.522707    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:03:23.798260    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:03:23.798260    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:23.798260    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:03:26.469273    1172 main.go:141] libmachine: [stdout =====>] : 
	I0807 20:03:26.469273    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:27.480907    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:03:29.810988    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:03:29.810988    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:29.811777    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:03:32.435693    1172 main.go:141] libmachine: [stdout =====>] : 
	I0807 20:03:32.435693    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:33.450597    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:03:35.767053    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:03:35.767053    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:35.767799    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:03:38.415674    1172 main.go:141] libmachine: [stdout =====>] : 172.28.235.119
	
	I0807 20:03:38.415674    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:38.418845    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:03:40.702676    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:03:40.702676    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:40.702676    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:03:43.362301    1172 main.go:141] libmachine: [stdout =====>] : 172.28.235.119
	
	I0807 20:03:43.362301    1172 main.go:141] libmachine: [stderr =====>] : 
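The lines above show minikube's wait loop: it alternates querying the VM's state and its first IPv4 address via PowerShell, sleeping about a second between attempts, until the adapter reports an address (here 172.28.235.119). A minimal sketch of that retry-until-nonempty pattern, with the PowerShell invocation replaced by an arbitrary command for illustration:

```shell
# Hypothetical sketch of the polling loop in the log: retry a command that
# prints the VM's IP (empty until the guest has booted far enough to get
# one), up to a fixed number of attempts. The real loop shells out to
# `powershell.exe ... (( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]`.
wait_for_ip() {
  local attempts=$1; shift
  local ip=""
  for _ in $(seq 1 "$attempts"); do
    ip=$("$@")                       # stand-in for the PowerShell query
    if [ -n "$ip" ]; then
      echo "$ip"
      return 0
    fi
    sleep 1                          # the log shows ~1s between retries
  done
  return 1
}
```

This is a sketch under the assumption that an empty stdout means "no address yet", which matches the empty `[stdout =====>]` lines earlier in the log.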
	I0807 20:03:43.362301    1172 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\config.json ...
	I0807 20:03:43.365389    1172 machine.go:94] provisionDockerMachine start ...
	I0807 20:03:43.365503    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:03:45.726529    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:03:45.726889    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:45.726889    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:03:48.521577    1172 main.go:141] libmachine: [stdout =====>] : 172.28.235.119
	
	I0807 20:03:48.522042    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:48.526893    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:03:48.527755    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.119 22 <nil> <nil>}
	I0807 20:03:48.527755    1172 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 20:03:48.658956    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0807 20:03:48.659055    1172 buildroot.go:166] provisioning hostname "multinode-116700-m02"
	I0807 20:03:48.659055    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:03:51.081900    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:03:51.081900    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:51.082286    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:03:53.881840    1172 main.go:141] libmachine: [stdout =====>] : 172.28.235.119
	
	I0807 20:03:53.882424    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:53.887608    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:03:53.889272    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.119 22 <nil> <nil>}
	I0807 20:03:53.889272    1172 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-116700-m02 && echo "multinode-116700-m02" | sudo tee /etc/hostname
	I0807 20:03:54.060253    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-116700-m02
	
	I0807 20:03:54.060295    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:03:56.391326    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:03:56.391326    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:56.392090    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:03:59.215553    1172 main.go:141] libmachine: [stdout =====>] : 172.28.235.119
	
	I0807 20:03:59.215553    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:59.222927    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:03:59.223198    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.119 22 <nil> <nil>}
	I0807 20:03:59.223198    1172 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-116700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-116700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-116700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 20:03:59.376483    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: 
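The SSH command just above makes the hostname mapping in /etc/hosts idempotent: if the new hostname is absent, it either rewrites an existing `127.0.1.1` entry or appends one. The same logic as a standalone sketch, parameterized on the hosts-file path so it can run without root (the path argument is illustrative, not part of minikube):

```shell
# Mirror of the replace-or-append logic in the log: skip if the hostname is
# already mapped, rewrite an existing 127.0.1.1 line if present, otherwise
# append a fresh entry. Uses GNU grep/sed \s as the original command does.
ensure_host_entry() {
  local hosts_file=$1 name=$2
  if ! grep -q "\s${name}\$" "$hosts_file"; then
    if grep -q '^127\.0\.1\.1\s' "$hosts_file"; then
      sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${name}/" "$hosts_file"
    else
      echo "127.0.1.1 ${name}" >> "$hosts_file"
    fi
  fi
}
```

Running it twice leaves exactly one entry, which is why minikube can safely re-run it on every provision.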
	I0807 20:03:59.376483    1172 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0807 20:03:59.376597    1172 buildroot.go:174] setting up certificates
	I0807 20:03:59.376597    1172 provision.go:84] configureAuth start
	I0807 20:03:59.376679    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:04:01.733200    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:04:01.733200    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:04:01.733786    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:04:04.482829    1172 main.go:141] libmachine: [stdout =====>] : 172.28.235.119
	
	I0807 20:04:04.482829    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:04:04.482829    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:04:06.802398    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:04:06.802841    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:04:06.802898    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:04:09.549409    1172 main.go:141] libmachine: [stdout =====>] : 172.28.235.119
	
	I0807 20:04:09.549409    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:04:09.549409    1172 provision.go:143] copyHostCerts
	I0807 20:04:09.549409    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0807 20:04:09.550394    1172 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0807 20:04:09.550394    1172 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0807 20:04:09.550602    1172 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0807 20:04:09.551856    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0807 20:04:09.551856    1172 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0807 20:04:09.551856    1172 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0807 20:04:09.552383    1172 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0807 20:04:09.553341    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0807 20:04:09.553341    1172 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0807 20:04:09.553341    1172 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0807 20:04:09.553341    1172 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0807 20:04:09.554605    1172 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-116700-m02 san=[127.0.0.1 172.28.235.119 localhost minikube multinode-116700-m02]
	I0807 20:04:09.729169    1172 provision.go:177] copyRemoteCerts
	I0807 20:04:09.742026    1172 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 20:04:09.742026    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:04:12.025197    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:04:12.025197    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:04:12.025197    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:04:14.699257    1172 main.go:141] libmachine: [stdout =====>] : 172.28.235.119
	
	I0807 20:04:14.699257    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:04:14.700474    1172 sshutil.go:53] new ssh client: &{IP:172.28.235.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700-m02\id_rsa Username:docker}
	I0807 20:04:14.802751    1172 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0606608s)
	I0807 20:04:14.802751    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0807 20:04:14.803252    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 20:04:14.850202    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0807 20:04:14.850294    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0807 20:04:14.898812    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0807 20:04:14.899393    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0807 20:04:14.951577    1172 provision.go:87] duration metric: took 15.5747826s to configureAuth
	I0807 20:04:14.951577    1172 buildroot.go:189] setting minikube options for container-runtime
	I0807 20:04:14.952586    1172 config.go:182] Loaded profile config "multinode-116700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 20:04:14.952586    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:04:17.228798    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:04:17.228798    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:04:17.229047    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:04:20.030388    1172 main.go:141] libmachine: [stdout =====>] : 172.28.235.119
	
	I0807 20:04:20.030732    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:04:20.037714    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:04:20.038713    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.119 22 <nil> <nil>}
	I0807 20:04:20.038713    1172 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0807 20:04:20.177841    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0807 20:04:20.177841    1172 buildroot.go:70] root file system type: tmpfs
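Before rewriting the docker systemd unit, minikube probes the guest's root filesystem type with the `df` one-liner shown above (the buildroot guest reports `tmpfs`). The probe isolated as a sketch, assuming GNU coreutils `df` with `--output` support:

```shell
# Same probe as the SSH command in the log: ask df for the fstype column of
# the filesystem backing /, then drop the header row, leaving just the value.
root_fstype() {
  df --output=fstype / | tail -n 1
}
```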
	I0807 20:04:20.177909    1172 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0807 20:04:20.178199    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:04:22.513557    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:04:22.513997    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:04:22.514095    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:04:25.252126    1172 main.go:141] libmachine: [stdout =====>] : 172.28.235.119
	
	I0807 20:04:25.252472    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:04:25.257666    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:04:25.258030    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.119 22 <nil> <nil>}
	I0807 20:04:25.258030    1172 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.226.95"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0807 20:04:25.418802    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.226.95
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0807 20:04:25.418802    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:04:27.643194    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:04:27.643194    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:04:27.643194    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-116700" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-116700
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-116700: context deadline exceeded (137µs)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-116700" : context deadline exceeded
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-116700	172.28.224.86
multinode-116700-m02	172.28.226.55
multinode-116700-m03	172.28.226.146

                                                
                                                
After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-116700 -n multinode-116700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-116700 -n multinode-116700: (12.6648399s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 logs -n 25: (9.2094924s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-116700 cp testdata\cp-test.txt                                                                                 | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:50 UTC | 07 Aug 24 19:50 UTC |
	|         | multinode-116700-m02:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-116700 ssh -n                                                                                                  | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:50 UTC | 07 Aug 24 19:50 UTC |
	|         | multinode-116700-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-116700 cp multinode-116700-m02:/home/docker/cp-test.txt                                                        | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:50 UTC | 07 Aug 24 19:50 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile3663502109\001\cp-test_multinode-116700-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-116700 ssh -n                                                                                                  | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:50 UTC | 07 Aug 24 19:50 UTC |
	|         | multinode-116700-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-116700 cp multinode-116700-m02:/home/docker/cp-test.txt                                                        | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:50 UTC | 07 Aug 24 19:50 UTC |
	|         | multinode-116700:/home/docker/cp-test_multinode-116700-m02_multinode-116700.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-116700 ssh -n                                                                                                  | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:51 UTC | 07 Aug 24 19:51 UTC |
	|         | multinode-116700-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-116700 ssh -n multinode-116700 sudo cat                                                                        | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:51 UTC | 07 Aug 24 19:51 UTC |
	|         | /home/docker/cp-test_multinode-116700-m02_multinode-116700.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-116700 cp multinode-116700-m02:/home/docker/cp-test.txt                                                        | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:51 UTC | 07 Aug 24 19:51 UTC |
	|         | multinode-116700-m03:/home/docker/cp-test_multinode-116700-m02_multinode-116700-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-116700 ssh -n                                                                                                  | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:51 UTC | 07 Aug 24 19:51 UTC |
	|         | multinode-116700-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-116700 ssh -n multinode-116700-m03 sudo cat                                                                    | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:51 UTC | 07 Aug 24 19:51 UTC |
	|         | /home/docker/cp-test_multinode-116700-m02_multinode-116700-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-116700 cp testdata\cp-test.txt                                                                                 | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:51 UTC | 07 Aug 24 19:52 UTC |
	|         | multinode-116700-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-116700 ssh -n                                                                                                  | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:52 UTC | 07 Aug 24 19:52 UTC |
	|         | multinode-116700-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-116700 cp multinode-116700-m03:/home/docker/cp-test.txt                                                        | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:52 UTC | 07 Aug 24 19:52 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile3663502109\001\cp-test_multinode-116700-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-116700 ssh -n                                                                                                  | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:52 UTC | 07 Aug 24 19:52 UTC |
	|         | multinode-116700-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-116700 cp multinode-116700-m03:/home/docker/cp-test.txt                                                        | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:52 UTC | 07 Aug 24 19:52 UTC |
	|         | multinode-116700:/home/docker/cp-test_multinode-116700-m03_multinode-116700.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-116700 ssh -n                                                                                                  | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:52 UTC | 07 Aug 24 19:53 UTC |
	|         | multinode-116700-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-116700 ssh -n multinode-116700 sudo cat                                                                        | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:53 UTC | 07 Aug 24 19:53 UTC |
	|         | /home/docker/cp-test_multinode-116700-m03_multinode-116700.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-116700 cp multinode-116700-m03:/home/docker/cp-test.txt                                                        | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:53 UTC | 07 Aug 24 19:53 UTC |
	|         | multinode-116700-m02:/home/docker/cp-test_multinode-116700-m03_multinode-116700-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-116700 ssh -n                                                                                                  | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:53 UTC | 07 Aug 24 19:53 UTC |
	|         | multinode-116700-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-116700 ssh -n multinode-116700-m02 sudo cat                                                                    | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:53 UTC | 07 Aug 24 19:53 UTC |
	|         | /home/docker/cp-test_multinode-116700-m03_multinode-116700-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-116700 node stop m03                                                                                           | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:53 UTC | 07 Aug 24 19:54 UTC |
	| node    | multinode-116700 node start                                                                                              | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:55 UTC | 07 Aug 24 19:57 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-116700                                                                                                 | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:58 UTC |                     |
	| stop    | -p multinode-116700                                                                                                      | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 19:58 UTC | 07 Aug 24 20:00 UTC |
	| start   | -p multinode-116700                                                                                                      | multinode-116700 | minikube6\jenkins | v1.33.1 | 07 Aug 24 20:00 UTC |                     |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 20:00:10
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 20:00:10.103540    1172 out.go:291] Setting OutFile to fd 1724 ...
	I0807 20:00:10.104539    1172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 20:00:10.104539    1172 out.go:304] Setting ErrFile to fd 1728...
	I0807 20:00:10.104539    1172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 20:00:10.127592    1172 out.go:298] Setting JSON to false
	I0807 20:00:10.131531    1172 start.go:129] hostinfo: {"hostname":"minikube6","uptime":322739,"bootTime":1722738070,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4717 Build 19045.4717","kernelVersion":"10.0.19045.4717 Build 19045.4717","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0807 20:00:10.131531    1172 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 20:00:10.177966    1172 out.go:177] * [multinode-116700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	I0807 20:00:10.279375    1172 notify.go:220] Checking for updates...
	I0807 20:00:10.299518    1172 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 20:00:10.328135    1172 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 20:00:10.339615    1172 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0807 20:00:10.354482    1172 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 20:00:10.382547    1172 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 20:00:10.392146    1172 config.go:182] Loaded profile config "multinode-116700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 20:00:10.392699    1172 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 20:00:16.037086    1172 out.go:177] * Using the hyperv driver based on existing profile
	I0807 20:00:16.050435    1172 start.go:297] selected driver: hyperv
	I0807 20:00:16.050435    1172 start.go:901] validating driver "hyperv" against &{Name:multinode-116700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-116700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.224.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.226.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.226.146 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 20:00:16.051557    1172 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 20:00:16.109811    1172 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 20:00:16.109811    1172 cni.go:84] Creating CNI manager for ""
	I0807 20:00:16.109811    1172 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0807 20:00:16.109811    1172 start.go:340] cluster config:
	{Name:multinode-116700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-116700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.224.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.226.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.226.146 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 20:00:16.109811    1172 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 20:00:16.140527    1172 out.go:177] * Starting "multinode-116700" primary control-plane node in "multinode-116700" cluster
	I0807 20:00:16.144337    1172 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 20:00:16.144451    1172 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0807 20:00:16.144451    1172 cache.go:56] Caching tarball of preloaded images
	I0807 20:00:16.144733    1172 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0807 20:00:16.144733    1172 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 20:00:16.145402    1172 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\config.json ...
	I0807 20:00:16.148192    1172 start.go:360] acquireMachinesLock for multinode-116700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 20:00:16.148277    1172 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-116700"
	I0807 20:00:16.148277    1172 start.go:96] Skipping create...Using existing machine configuration
	I0807 20:00:16.148277    1172 fix.go:54] fixHost starting: 
	I0807 20:00:16.148848    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:00:19.012991    1172 main.go:141] libmachine: [stdout =====>] : Off
	
	I0807 20:00:19.012991    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:19.012991    1172 fix.go:112] recreateIfNeeded on multinode-116700: state=Stopped err=<nil>
	W0807 20:00:19.012991    1172 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 20:00:19.016044    1172 out.go:177] * Restarting existing hyperv VM for "multinode-116700" ...
	I0807 20:00:19.020008    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-116700
	I0807 20:00:22.162390    1172 main.go:141] libmachine: [stdout =====>] : 
	I0807 20:00:22.163445    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:22.163445    1172 main.go:141] libmachine: Waiting for host to start...
	I0807 20:00:22.163445    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:00:24.521154    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:00:24.521154    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:24.521154    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:00:27.104032    1172 main.go:141] libmachine: [stdout =====>] : 
	I0807 20:00:27.104082    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:28.118286    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:00:30.414258    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:00:30.414258    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:30.414861    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:00:33.039773    1172 main.go:141] libmachine: [stdout =====>] : 
	I0807 20:00:33.039773    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:34.054566    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:00:36.376044    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:00:36.376044    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:36.376044    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:00:39.072711    1172 main.go:141] libmachine: [stdout =====>] : 
	I0807 20:00:39.072945    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:40.075457    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:00:42.436819    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:00:42.436819    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:42.436819    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:00:45.086832    1172 main.go:141] libmachine: [stdout =====>] : 
	I0807 20:00:45.086832    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:46.100853    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:00:48.407380    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:00:48.407380    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:48.407497    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:00:51.060536    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:00:51.060536    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:51.064184    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:00:53.361508    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:00:53.361508    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:53.361850    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:00:56.108200    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:00:56.108200    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:56.109427    1172 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\config.json ...
	I0807 20:00:56.113460    1172 machine.go:94] provisionDockerMachine start ...
	I0807 20:00:56.113591    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:00:58.409696    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:00:58.409696    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:00:58.410589    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:01.042695    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:01.042695    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:01.048860    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:01:01.049544    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.95 22 <nil> <nil>}
	I0807 20:01:01.049544    1172 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 20:01:01.183207    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0807 20:01:01.183207    1172 buildroot.go:166] provisioning hostname "multinode-116700"
	I0807 20:01:01.183207    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:01:03.374260    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:01:03.374260    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:03.374260    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:06.002746    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:06.003046    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:06.008544    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:01:06.008732    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.95 22 <nil> <nil>}
	I0807 20:01:06.008732    1172 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-116700 && echo "multinode-116700" | sudo tee /etc/hostname
	I0807 20:01:06.164405    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-116700
	
	I0807 20:01:06.164405    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:01:08.426773    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:01:08.426773    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:08.427327    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:11.067823    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:11.068100    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:11.074027    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:01:11.074027    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.95 22 <nil> <nil>}
	I0807 20:01:11.074550    1172 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-116700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-116700/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-116700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 20:01:11.233499    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: 
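The shell fragment just executed is minikube's idempotent `/etc/hosts` fix-up: if no line already ends in the hostname, it rewrites an existing `127.0.1.1` entry in place, otherwise appends one. The same logic as a pure Go function, for illustration only (minikube runs the shell version over SSH, not anything like this):

```go
package main

import (
	"fmt"
	"strings"
)

// setHostsEntry mirrors the shell fragment above: if no line of the
// hosts file already ends with the hostname, rewrite an existing
// "127.0.1.1 ..." line, otherwise append a new entry.
func setHostsEntry(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) >= 2 && f[len(f)-1] == name {
			return hosts // already present; the shell's first grep matches
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // the sed branch
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name // the `tee -a` branch
}

func main() {
	fmt.Println(setHostsEntry("127.0.0.1 localhost", "multinode-116700"))
}
```

Because the first check short-circuits, re-running the command on an already-provisioned guest (as happens on this restart path) leaves `/etc/hosts` untouched, which is why the SSH command returns empty output above.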
	I0807 20:01:11.233499    1172 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0807 20:01:11.233640    1172 buildroot.go:174] setting up certificates
	I0807 20:01:11.233640    1172 provision.go:84] configureAuth start
	I0807 20:01:11.233676    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:01:13.441181    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:01:13.441409    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:13.441409    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:16.040959    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:16.040959    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:16.040959    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:01:18.311508    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:01:18.311508    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:18.311987    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:20.973941    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:20.973941    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:20.973941    1172 provision.go:143] copyHostCerts
	I0807 20:01:20.974270    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0807 20:01:20.974693    1172 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0807 20:01:20.974693    1172 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0807 20:01:20.975393    1172 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0807 20:01:20.976694    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0807 20:01:20.976694    1172 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0807 20:01:20.977307    1172 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0807 20:01:20.977307    1172 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0807 20:01:20.978871    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0807 20:01:20.979404    1172 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0807 20:01:20.979404    1172 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0807 20:01:20.979614    1172 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0807 20:01:20.981267    1172 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-116700 san=[127.0.0.1 172.28.226.95 localhost minikube multinode-116700]
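The `provision.go` line above generates a server certificate whose SAN list mixes IP and DNS entries (`127.0.0.1 172.28.226.95 localhost minikube multinode-116700`). Before such a list can go into an x509 template it has to be partitioned into `IPAddresses` and `DNSNames`; a sketch of that step (illustrative, not minikube's actual helper):

```go
package main

import (
	"fmt"
	"net"
)

// splitSANs partitions a mixed subject-alternative-name list, like the
// log's san=[127.0.0.1 172.28.226.95 localhost minikube multinode-116700],
// into the IPAddresses and DNSNames fields an x509.Certificate template
// expects: anything net.ParseIP accepts is an IP SAN, the rest are DNS SANs.
func splitSANs(sans []string) (ips []net.IP, dns []string) {
	for _, s := range sans {
		if ip := net.ParseIP(s); ip != nil {
			ips = append(ips, ip)
		} else {
			dns = append(dns, s)
		}
	}
	return ips, dns
}

func main() {
	ips, dns := splitSANs([]string{"127.0.0.1", "172.28.226.95", "localhost", "minikube", "multinode-116700"})
	fmt.Println(len(ips), "IP SANs,", len(dns), "DNS SANs")
}
```

Note that the SAN list includes the VM's current Hyper-V address, which is why the server cert must be regenerated on this restart path: the address can change each time the VM boots.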
	I0807 20:01:21.124252    1172 provision.go:177] copyRemoteCerts
	I0807 20:01:21.135716    1172 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 20:01:21.135716    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:01:23.382416    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:01:23.382416    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:23.382416    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:26.048404    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:26.048404    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:26.049688    1172 sshutil.go:53] new ssh client: &{IP:172.28.226.95 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700\id_rsa Username:docker}
	I0807 20:01:26.165866    1172 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0300853s)
	I0807 20:01:26.165866    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0807 20:01:26.166571    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 20:01:26.222997    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0807 20:01:26.223813    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0807 20:01:26.268398    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0807 20:01:26.269380    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0807 20:01:26.321927    1172 provision.go:87] duration metric: took 15.0880571s to configureAuth
	I0807 20:01:26.321927    1172 buildroot.go:189] setting minikube options for container-runtime
	I0807 20:01:26.322777    1172 config.go:182] Loaded profile config "multinode-116700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 20:01:26.322777    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:01:28.636835    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:01:28.636988    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:28.637043    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:31.395949    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:31.395949    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:31.402593    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:01:31.403397    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.95 22 <nil> <nil>}
	I0807 20:01:31.403397    1172 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0807 20:01:31.532267    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0807 20:01:31.532406    1172 buildroot.go:70] root file system type: tmpfs
	I0807 20:01:31.532689    1172 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0807 20:01:31.532780    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:01:33.827761    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:01:33.828069    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:33.828159    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:36.590763    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:36.590763    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:36.596978    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:01:36.597756    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.95 22 <nil> <nil>}
	I0807 20:01:36.597756    1172 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0807 20:01:36.750943    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0807 20:01:36.751059    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:01:39.108522    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:01:39.108522    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:39.109412    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:41.817739    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:41.817739    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:41.823991    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:01:41.824680    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.95 22 <nil> <nil>}
	I0807 20:01:41.824680    1172 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0807 20:01:44.481944    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0807 20:01:44.481944    1172 machine.go:97] duration metric: took 48.3678652s to provisionDockerMachine
	I0807 20:01:44.481944    1172 start.go:293] postStartSetup for "multinode-116700" (driver="hyperv")
	I0807 20:01:44.481944    1172 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 20:01:44.495249    1172 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 20:01:44.495249    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:01:46.673420    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:01:46.673420    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:46.673420    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:49.339449    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:49.340512    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:49.341342    1172 sshutil.go:53] new ssh client: &{IP:172.28.226.95 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700\id_rsa Username:docker}
	I0807 20:01:49.442382    1172 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9468896s)
	I0807 20:01:49.455670    1172 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 20:01:49.462046    1172 command_runner.go:130] > NAME=Buildroot
	I0807 20:01:49.462046    1172 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0807 20:01:49.462046    1172 command_runner.go:130] > ID=buildroot
	I0807 20:01:49.462046    1172 command_runner.go:130] > VERSION_ID=2023.02.9
	I0807 20:01:49.462046    1172 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0807 20:01:49.462257    1172 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 20:01:49.462363    1172 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0807 20:01:49.462857    1172 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0807 20:01:49.463770    1172 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> 96602.pem in /etc/ssl/certs
	I0807 20:01:49.463839    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> /etc/ssl/certs/96602.pem
	I0807 20:01:49.475789    1172 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 20:01:49.492110    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /etc/ssl/certs/96602.pem (1708 bytes)
	I0807 20:01:49.539608    1172 start.go:296] duration metric: took 5.0575985s for postStartSetup
	I0807 20:01:49.539661    1172 fix.go:56] duration metric: took 1m33.3901884s for fixHost
	I0807 20:01:49.539854    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:01:51.735819    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:01:51.735819    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:51.736786    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:54.361813    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:54.361813    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:54.367767    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:01:54.368359    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.95 22 <nil> <nil>}
	I0807 20:01:54.368497    1172 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 20:01:54.489595    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723060914.509642655
	
	I0807 20:01:54.489595    1172 fix.go:216] guest clock: 1723060914.509642655
	I0807 20:01:54.489595    1172 fix.go:229] Guest: 2024-08-07 20:01:54.509642655 +0000 UTC Remote: 2024-08-07 20:01:49.5397594 +0000 UTC m=+99.596668501 (delta=4.969883255s)
	I0807 20:01:54.489795    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:01:56.673033    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:01:56.673033    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:56.673405    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:01:59.361130    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:01:59.361850    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:01:59.367136    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:01:59.367677    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.226.95 22 <nil> <nil>}
	I0807 20:01:59.367677    1172 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1723060914
	I0807 20:01:59.509330    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Aug  7 20:01:54 UTC 2024
	
	I0807 20:01:59.509330    1172 fix.go:236] clock set: Wed Aug  7 20:01:54 UTC 2024
	 (err=<nil>)
	I0807 20:01:59.509330    1172 start.go:83] releasing machines lock for "multinode-116700", held for 1m43.3597303s
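The guest-clock fix logged above reads the VM's epoch time over SSH (`date +%s.%N`), compares it with the host clock, and resets the guest with `sudo date -s @<epoch>` when they drift apart. A minimal sketch of that drift computation, using illustrative values rather than minikube's actual fix.go code:

```shell
# Hypothetical sketch of the clock-skew check: values are illustrative,
# not taken from a real host/guest pair.
guest_epoch=$(echo '1723060914.509642655' | cut -d. -f1)  # guest: date +%s.%N
host_epoch=1723060909                                     # host-side reading
delta=$((guest_epoch - host_epoch))
echo "delta=${delta}s"
# minikube would then run over SSH:  sudo date -s @${guest_epoch}
```

When the delta exceeds a threshold, resetting the guest clock keeps TLS certificate validation and log timestamps consistent across host and VM.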
	I0807 20:01:59.509951    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:02:01.692427    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:02:01.692553    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:02:01.692553    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:02:04.315212    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:02:04.315212    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:02:04.319274    1172 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0807 20:02:04.319274    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:02:04.329957    1172 ssh_runner.go:195] Run: cat /version.json
	I0807 20:02:04.330764    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 20:02:06.604118    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:02:06.604118    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:02:06.604118    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:02:06.620664    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:02:06.620664    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:02:06.621606    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 20:02:09.382467    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:02:09.383226    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:02:09.383904    1172 sshutil.go:53] new ssh client: &{IP:172.28.226.95 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700\id_rsa Username:docker}
	I0807 20:02:09.404504    1172 main.go:141] libmachine: [stdout =====>] : 172.28.226.95
	
	I0807 20:02:09.404504    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:02:09.405082    1172 sshutil.go:53] new ssh client: &{IP:172.28.226.95 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700\id_rsa Username:docker}
	I0807 20:02:09.478536    1172 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0807 20:02:09.478657    1172 ssh_runner.go:235] Completed: cat /version.json: (5.1479826s)
	I0807 20:02:09.491319    1172 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0807 20:02:09.492517    1172 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1731771s)
	W0807 20:02:09.492517    1172 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0807 20:02:09.495298    1172 ssh_runner.go:195] Run: systemctl --version
	I0807 20:02:09.506461    1172 command_runner.go:130] > systemd 252 (252)
	I0807 20:02:09.506461    1172 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0807 20:02:09.520033    1172 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0807 20:02:09.533683    1172 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0807 20:02:09.533811    1172 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 20:02:09.546640    1172 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 20:02:09.577504    1172 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0807 20:02:09.577944    1172 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0807 20:02:09.577944    1172 start.go:495] detecting cgroup driver to use...
	I0807 20:02:09.578364    1172 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 20:02:09.614371    1172 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0807 20:02:09.627908    1172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0807 20:02:09.659545    1172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0807 20:02:09.680172    1172 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0807 20:02:09.695965    1172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0807 20:02:09.728498    1172 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 20:02:09.760768    1172 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0807 20:02:09.791840    1172 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0807 20:02:09.821453    1172 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 20:02:09.853626    1172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0807 20:02:09.883121    1172 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0807 20:02:09.915213    1172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
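The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place to force the `cgroupfs` driver and the `runc.v2` runtime. The key substitution can be reproduced against a scratch copy of the file (the TOML snippet here is illustrative, not the full containerd config):

```shell
# Reproduce the SystemdCgroup edit on a temp file, not the real config.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same substitution minikube runs to select the cgroupfs driver,
# preserving the line's original indentation via the capture group:
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
result=$(grep -c 'SystemdCgroup = false' "$cfg")
echo "$result"
rm -f "$cfg"
```

Editing with anchored, indentation-preserving patterns keeps the change idempotent: re-running the same `sed` on an already-patched file is a no-op.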
	I0807 20:02:09.946534    1172 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 20:02:09.964465    1172 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0807 20:02:09.976623    1172 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 20:02:10.006278    1172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 20:02:10.232604    1172 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0807 20:02:10.266321    1172 start.go:495] detecting cgroup driver to use...
	I0807 20:02:10.283268    1172 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0807 20:02:10.309588    1172 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0807 20:02:10.309588    1172 command_runner.go:130] > [Unit]
	I0807 20:02:10.309588    1172 command_runner.go:130] > Description=Docker Application Container Engine
	I0807 20:02:10.309588    1172 command_runner.go:130] > Documentation=https://docs.docker.com
	I0807 20:02:10.309588    1172 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0807 20:02:10.309588    1172 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0807 20:02:10.309588    1172 command_runner.go:130] > StartLimitBurst=3
	I0807 20:02:10.309588    1172 command_runner.go:130] > StartLimitIntervalSec=60
	I0807 20:02:10.309588    1172 command_runner.go:130] > [Service]
	I0807 20:02:10.309588    1172 command_runner.go:130] > Type=notify
	I0807 20:02:10.309588    1172 command_runner.go:130] > Restart=on-failure
	I0807 20:02:10.309588    1172 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0807 20:02:10.309828    1172 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0807 20:02:10.309828    1172 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0807 20:02:10.309828    1172 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0807 20:02:10.309828    1172 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0807 20:02:10.309828    1172 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0807 20:02:10.309828    1172 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0807 20:02:10.309963    1172 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0807 20:02:10.309963    1172 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0807 20:02:10.309963    1172 command_runner.go:130] > ExecStart=
	I0807 20:02:10.309963    1172 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0807 20:02:10.309963    1172 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0807 20:02:10.309963    1172 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0807 20:02:10.310089    1172 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0807 20:02:10.310089    1172 command_runner.go:130] > LimitNOFILE=infinity
	I0807 20:02:10.310089    1172 command_runner.go:130] > LimitNPROC=infinity
	I0807 20:02:10.310089    1172 command_runner.go:130] > LimitCORE=infinity
	I0807 20:02:10.310089    1172 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0807 20:02:10.310089    1172 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0807 20:02:10.310089    1172 command_runner.go:130] > TasksMax=infinity
	I0807 20:02:10.310089    1172 command_runner.go:130] > TimeoutStartSec=0
	I0807 20:02:10.310089    1172 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0807 20:02:10.310089    1172 command_runner.go:130] > Delegate=yes
	I0807 20:02:10.310203    1172 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0807 20:02:10.310226    1172 command_runner.go:130] > KillMode=process
	I0807 20:02:10.310226    1172 command_runner.go:130] > [Install]
	I0807 20:02:10.310226    1172 command_runner.go:130] > WantedBy=multi-user.target
	I0807 20:02:10.322608    1172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 20:02:10.358912    1172 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 20:02:10.405593    1172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 20:02:10.441549    1172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 20:02:10.473060    1172 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	W0807 20:02:10.543555    1172 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0807 20:02:10.543555    1172 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0807 20:02:10.546508    1172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0807 20:02:10.572162    1172 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 20:02:10.609804    1172 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0807 20:02:10.622535    1172 ssh_runner.go:195] Run: which cri-dockerd
	I0807 20:02:10.628457    1172 command_runner.go:130] > /usr/bin/cri-dockerd
	I0807 20:02:10.639874    1172 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0807 20:02:10.657182    1172 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0807 20:02:10.705090    1172 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0807 20:02:10.906846    1172 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0807 20:02:11.095746    1172 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0807 20:02:11.096131    1172 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0807 20:02:11.144438    1172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 20:02:11.346499    1172 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0807 20:02:14.064580    1172 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.7179767s)
	I0807 20:02:14.077726    1172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0807 20:02:14.116085    1172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 20:02:14.151561    1172 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0807 20:02:14.371765    1172 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0807 20:02:14.578435    1172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 20:02:14.778375    1172 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0807 20:02:14.828395    1172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0807 20:02:14.871851    1172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 20:02:15.091292    1172 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0807 20:02:15.194467    1172 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0807 20:02:15.207739    1172 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0807 20:02:15.215931    1172 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0807 20:02:15.216054    1172 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0807 20:02:15.216054    1172 command_runner.go:130] > Device: 0,22	Inode: 850         Links: 1
	I0807 20:02:15.216054    1172 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0807 20:02:15.216112    1172 command_runner.go:130] > Access: 2024-08-07 20:02:15.135634566 +0000
	I0807 20:02:15.216144    1172 command_runner.go:130] > Modify: 2024-08-07 20:02:15.135634566 +0000
	I0807 20:02:15.216144    1172 command_runner.go:130] > Change: 2024-08-07 20:02:15.140634576 +0000
	I0807 20:02:15.216144    1172 command_runner.go:130] >  Birth: -
	I0807 20:02:15.216769    1172 start.go:563] Will wait 60s for crictl version
	I0807 20:02:15.228888    1172 ssh_runner.go:195] Run: which crictl
	I0807 20:02:15.233902    1172 command_runner.go:130] > /usr/bin/crictl
	I0807 20:02:15.245796    1172 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 20:02:15.299372    1172 command_runner.go:130] > Version:  0.1.0
	I0807 20:02:15.299372    1172 command_runner.go:130] > RuntimeName:  docker
	I0807 20:02:15.299372    1172 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0807 20:02:15.299372    1172 command_runner.go:130] > RuntimeApiVersion:  v1
	I0807 20:02:15.299372    1172 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0807 20:02:15.309158    1172 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 20:02:15.341827    1172 command_runner.go:130] > 27.1.1
	I0807 20:02:15.351138    1172 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0807 20:02:15.381062    1172 command_runner.go:130] > 27.1.1
	I0807 20:02:15.386326    1172 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0807 20:02:15.387041    1172 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0807 20:02:15.391449    1172 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0807 20:02:15.391449    1172 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0807 20:02:15.391449    1172 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0807 20:02:15.391449    1172 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:f6:3a:6a Flags:up|broadcast|multicast|running}
	I0807 20:02:15.393439    1172 ip.go:210] interface addr: fe80::e7eb:b592:d388:ff99/64
	I0807 20:02:15.394439    1172 ip.go:210] interface addr: 172.28.224.1/20
	I0807 20:02:15.404453    1172 ssh_runner.go:195] Run: grep 172.28.224.1	host.minikube.internal$ /etc/hosts
	I0807 20:02:15.412163    1172 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
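The `/etc/hosts` update above uses a filter-then-append pattern: strip any stale `host.minikube.internal` entry with `grep -v`, append the fresh one, and copy the result back, so repeated runs never accumulate duplicates. A sketch of the same pattern on a temp file (no `sudo` or real `/etc/hosts` involved; the stale `10.0.0.9` entry is invented for illustration):

```shell
# Demonstrate the idempotent hosts-entry rewrite on a scratch file.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
# Drop the stale entry, then append the current one:
updated=$({ grep -v $'\thost.minikube.internal$' "$hosts"; \
  echo $'172.28.224.1\thost.minikube.internal'; })
echo "$updated"
rm -f "$hosts"
```

The tab-anchored pattern (`$'\t...'`) matches only the hostname column, so unrelated lines such as `localhost` pass through untouched.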
	I0807 20:02:15.434123    1172 kubeadm.go:883] updating cluster {Name:multinode-116700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.3 ClusterName:multinode-116700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.226.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.226.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.226.146 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0807 20:02:15.434680    1172 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 20:02:15.444525    1172 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0807 20:02:15.470479    1172 command_runner.go:130] > kindest/kindnetd:v20240730-75a5af0c
	I0807 20:02:15.470479    1172 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0807 20:02:15.470479    1172 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0807 20:02:15.470479    1172 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0807 20:02:15.470479    1172 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0807 20:02:15.470479    1172 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0807 20:02:15.470479    1172 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0807 20:02:15.470479    1172 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0807 20:02:15.470479    1172 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 20:02:15.470479    1172 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0807 20:02:15.470479    1172 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240730-75a5af0c
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0807 20:02:15.470479    1172 docker.go:615] Images already preloaded, skipping extraction
	I0807 20:02:15.480873    1172 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0807 20:02:15.505892    1172 command_runner.go:130] > kindest/kindnetd:v20240730-75a5af0c
	I0807 20:02:15.505892    1172 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0807 20:02:15.505892    1172 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0807 20:02:15.506917    1172 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0807 20:02:15.506917    1172 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0807 20:02:15.506917    1172 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0807 20:02:15.506917    1172 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0807 20:02:15.506917    1172 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0807 20:02:15.506917    1172 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 20:02:15.506917    1172 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0807 20:02:15.506917    1172 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240730-75a5af0c
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0807 20:02:15.506917    1172 cache_images.go:84] Images are preloaded, skipping loading
	I0807 20:02:15.506917    1172 kubeadm.go:934] updating node { 172.28.226.95 8443 v1.30.3 docker true true} ...
	I0807 20:02:15.506917    1172 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-116700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.226.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-116700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 20:02:15.514888    1172 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0807 20:02:15.587485    1172 command_runner.go:130] > cgroupfs
	I0807 20:02:15.587949    1172 cni.go:84] Creating CNI manager for ""
	I0807 20:02:15.587949    1172 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0807 20:02:15.587949    1172 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 20:02:15.588016    1172 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.226.95 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-116700 NodeName:multinode-116700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.226.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.226.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 20:02:15.588081    1172 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.226.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-116700"
	  kubeletExtraArgs:
	    node-ip: 172.28.226.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.226.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0807 20:02:15.599195    1172 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 20:02:15.618182    1172 command_runner.go:130] > kubeadm
	I0807 20:02:15.618182    1172 command_runner.go:130] > kubectl
	I0807 20:02:15.618182    1172 command_runner.go:130] > kubelet
	I0807 20:02:15.619191    1172 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 20:02:15.629194    1172 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0807 20:02:15.647235    1172 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0807 20:02:15.678584    1172 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 20:02:15.708429    1172 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0807 20:02:15.754528    1172 ssh_runner.go:195] Run: grep 172.28.226.95	control-plane.minikube.internal$ /etc/hosts
	I0807 20:02:15.760235    1172 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.226.95	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 20:02:15.790352    1172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 20:02:15.989188    1172 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 20:02:16.018324    1172 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700 for IP: 172.28.226.95
	I0807 20:02:16.018324    1172 certs.go:194] generating shared ca certs ...
	I0807 20:02:16.018324    1172 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 20:02:16.019132    1172 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0807 20:02:16.019568    1172 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0807 20:02:16.019568    1172 certs.go:256] generating profile certs ...
	I0807 20:02:16.020293    1172 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\client.key
	I0807 20:02:16.020507    1172 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.key.df661a70
	I0807 20:02:16.020507    1172 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.crt.df661a70 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.226.95]
	I0807 20:02:16.264211    1172 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.crt.df661a70 ...
	I0807 20:02:16.264211    1172 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.crt.df661a70: {Name:mka21d5154a09762fea20bdb9ae90f9f716422d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 20:02:16.264756    1172 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.key.df661a70 ...
	I0807 20:02:16.265772    1172 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.key.df661a70: {Name:mk0a2c275254f84e3f2c77c6561fdb3c054cf975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 20:02:16.266082    1172 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.crt.df661a70 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.crt
	I0807 20:02:16.279860    1172 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.key.df661a70 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.key
	I0807 20:02:16.281809    1172 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\proxy-client.key
	I0807 20:02:16.281809    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0807 20:02:16.282023    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0807 20:02:16.282284    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0807 20:02:16.282492    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0807 20:02:16.282819    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0807 20:02:16.283046    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0807 20:02:16.283143    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0807 20:02:16.283276    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0807 20:02:16.283921    1172 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem (1338 bytes)
	W0807 20:02:16.283921    1172 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660_empty.pem, impossibly tiny 0 bytes
	I0807 20:02:16.283921    1172 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0807 20:02:16.284613    1172 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0807 20:02:16.284819    1172 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0807 20:02:16.284819    1172 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0807 20:02:16.285700    1172 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem (1708 bytes)
	I0807 20:02:16.285945    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0807 20:02:16.286109    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem -> /usr/share/ca-certificates/9660.pem
	I0807 20:02:16.286323    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem -> /usr/share/ca-certificates/96602.pem
	I0807 20:02:16.287558    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 20:02:16.342059    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 20:02:16.389999    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 20:02:16.440918    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0807 20:02:16.489939    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0807 20:02:16.537204    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0807 20:02:16.583348    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 20:02:16.629678    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 20:02:16.675675    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 20:02:16.722020    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9660.pem --> /usr/share/ca-certificates/9660.pem (1338 bytes)
	I0807 20:02:16.766024    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96602.pem --> /usr/share/ca-certificates/96602.pem (1708 bytes)
	I0807 20:02:16.811014    1172 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 20:02:16.860479    1172 ssh_runner.go:195] Run: openssl version
	I0807 20:02:16.869388    1172 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0807 20:02:16.882193    1172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 20:02:16.911911    1172 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 20:02:16.919343    1172 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  7 17:33 /usr/share/ca-certificates/minikubeCA.pem
	I0807 20:02:16.919437    1172 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:33 /usr/share/ca-certificates/minikubeCA.pem
	I0807 20:02:16.931265    1172 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 20:02:16.940164    1172 command_runner.go:130] > b5213941
	I0807 20:02:16.951969    1172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 20:02:16.984942    1172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9660.pem && ln -fs /usr/share/ca-certificates/9660.pem /etc/ssl/certs/9660.pem"
	I0807 20:02:17.018400    1172 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9660.pem
	I0807 20:02:17.026330    1172 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  7 17:50 /usr/share/ca-certificates/9660.pem
	I0807 20:02:17.026330    1172 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 17:50 /usr/share/ca-certificates/9660.pem
	I0807 20:02:17.038831    1172 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9660.pem
	I0807 20:02:17.047660    1172 command_runner.go:130] > 51391683
	I0807 20:02:17.062636    1172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9660.pem /etc/ssl/certs/51391683.0"
	I0807 20:02:17.094881    1172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96602.pem && ln -fs /usr/share/ca-certificates/96602.pem /etc/ssl/certs/96602.pem"
	I0807 20:02:17.125951    1172 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96602.pem
	I0807 20:02:17.133000    1172 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  7 17:50 /usr/share/ca-certificates/96602.pem
	I0807 20:02:17.133000    1172 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 17:50 /usr/share/ca-certificates/96602.pem
	I0807 20:02:17.146073    1172 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96602.pem
	I0807 20:02:17.156012    1172 command_runner.go:130] > 3ec20f2e
	I0807 20:02:17.168183    1172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96602.pem /etc/ssl/certs/3ec20f2e.0"
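The three `hashing` / `ln -fs` rounds above install each PEM under `/etc/ssl/certs/<subject-hash>.0`, the c_rehash-style name OpenSSL uses for CA lookup (e.g. `b5213941.0` for minikubeCA). A sketch of how that hash is derived, assuming the `openssl` CLI is available; the self-signed `/tmp/ca.pem` stands in for the real CA file:

```shell
#!/usr/bin/env bash
# Generate a throwaway CA cert, then compute the 8-hex-digit subject hash
# that names its /etc/ssl/certs/<hash>.0 symlink.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key -out /tmp/ca.pem \
  -subj "/CN=demoCA" -days 1 2>/dev/null

hash=$(openssl x509 -hash -noout -in /tmp/ca.pem)
echo "would link: /etc/ssl/certs/${hash}.0 -> /tmp/ca.pem"
```

The `test -L … || ln -fs …` guard in the log makes the link creation a no-op on restarts where the symlink already exists.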
	I0807 20:02:17.197874    1172 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 20:02:17.204301    1172 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 20:02:17.204445    1172 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0807 20:02:17.204531    1172 command_runner.go:130] > Device: 8,1	Inode: 2102098     Links: 1
	I0807 20:02:17.204531    1172 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0807 20:02:17.204531    1172 command_runner.go:130] > Access: 2024-08-07 19:37:26.697218980 +0000
	I0807 20:02:17.204615    1172 command_runner.go:130] > Modify: 2024-08-07 19:37:26.697218980 +0000
	I0807 20:02:17.204615    1172 command_runner.go:130] > Change: 2024-08-07 19:37:26.697218980 +0000
	I0807 20:02:17.204615    1172 command_runner.go:130] >  Birth: 2024-08-07 19:37:26.697218980 +0000
	I0807 20:02:17.215873    1172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0807 20:02:17.224955    1172 command_runner.go:130] > Certificate will not expire
	I0807 20:02:17.237117    1172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0807 20:02:17.247884    1172 command_runner.go:130] > Certificate will not expire
	I0807 20:02:17.263407    1172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0807 20:02:17.274406    1172 command_runner.go:130] > Certificate will not expire
	I0807 20:02:17.286816    1172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0807 20:02:17.297709    1172 command_runner.go:130] > Certificate will not expire
	I0807 20:02:17.312002    1172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0807 20:02:17.323195    1172 command_runner.go:130] > Certificate will not expire
	I0807 20:02:17.335664    1172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0807 20:02:17.345329    1172 command_runner.go:130] > Certificate will not expire
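Each `-checkend 86400` probe above asks whether the certificate is still valid 24 hours from now; exit status 0 yields the "Certificate will not expire" line. A minimal reproduction with a throwaway 30-day cert (`/tmp/c.pem` is illustrative):

```shell
#!/usr/bin/env bash
# Create a cert valid for 30 days, then check it will survive the next 86400s.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/k.pem -out /tmp/c.pem \
  -subj "/CN=t" -days 30 2>/dev/null

# Exit 0 => still valid a day from now; non-zero would trigger regeneration.
openssl x509 -noout -in /tmp/c.pem -checkend 86400 && echo "still valid tomorrow"
```

A non-zero exit here is what would push minikube down the cert-regeneration path instead of reusing the files on disk.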
	I0807 20:02:17.345753    1172 kubeadm.go:392] StartCluster: {Name:multinode-116700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-116700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.226.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.226.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.226.146 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 20:02:17.355518    1172 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0807 20:02:17.393969    1172 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0807 20:02:17.413939    1172 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0807 20:02:17.413939    1172 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0807 20:02:17.413939    1172 command_runner.go:130] > /var/lib/minikube/etcd:
	I0807 20:02:17.413939    1172 command_runner.go:130] > member
	I0807 20:02:17.413939    1172 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0807 20:02:17.413939    1172 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0807 20:02:17.425718    1172 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0807 20:02:17.445077    1172 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0807 20:02:17.446305    1172 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-116700" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 20:02:17.446901    1172 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-116700" cluster setting kubeconfig missing "multinode-116700" context setting]
	I0807 20:02:17.447841    1172 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 20:02:17.463405    1172 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 20:02:17.464071    1172 kapi.go:59] client config for multinode-116700: &rest.Config{Host:"https://172.28.226.95:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-116700/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-116700/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1da64c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0807 20:02:17.465785    1172 cert_rotation.go:137] Starting client certificate rotation controller
	I0807 20:02:17.477135    1172 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0807 20:02:17.495596    1172 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0807 20:02:17.495693    1172 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0807 20:02:17.495693    1172 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0807 20:02:17.495693    1172 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0807 20:02:17.495693    1172 command_runner.go:130] >  kind: InitConfiguration
	I0807 20:02:17.495693    1172 command_runner.go:130] >  localAPIEndpoint:
	I0807 20:02:17.495693    1172 command_runner.go:130] > -  advertiseAddress: 172.28.224.86
	I0807 20:02:17.495693    1172 command_runner.go:130] > +  advertiseAddress: 172.28.226.95
	I0807 20:02:17.495693    1172 command_runner.go:130] >    bindPort: 8443
	I0807 20:02:17.495693    1172 command_runner.go:130] >  bootstrapTokens:
	I0807 20:02:17.495693    1172 command_runner.go:130] >    - groups:
	I0807 20:02:17.495693    1172 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0807 20:02:17.495693    1172 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0807 20:02:17.495693    1172 command_runner.go:130] >    name: "multinode-116700"
	I0807 20:02:17.495693    1172 command_runner.go:130] >    kubeletExtraArgs:
	I0807 20:02:17.495693    1172 command_runner.go:130] > -    node-ip: 172.28.224.86
	I0807 20:02:17.495693    1172 command_runner.go:130] > +    node-ip: 172.28.226.95
	I0807 20:02:17.495693    1172 command_runner.go:130] >    taints: []
	I0807 20:02:17.495693    1172 command_runner.go:130] >  ---
	I0807 20:02:17.495693    1172 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0807 20:02:17.495693    1172 command_runner.go:130] >  kind: ClusterConfiguration
	I0807 20:02:17.495693    1172 command_runner.go:130] >  apiServer:
	I0807 20:02:17.495693    1172 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.28.224.86"]
	I0807 20:02:17.495693    1172 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.28.226.95"]
	I0807 20:02:17.495693    1172 command_runner.go:130] >    extraArgs:
	I0807 20:02:17.495693    1172 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0807 20:02:17.495693    1172 command_runner.go:130] >  controllerManager:
	I0807 20:02:17.495693    1172 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.28.224.86
	+  advertiseAddress: 172.28.226.95
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-116700"
	   kubeletExtraArgs:
	-    node-ip: 172.28.224.86
	+    node-ip: 172.28.226.95
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.28.224.86"]
	+  certSANs: ["127.0.0.1", "localhost", "172.28.226.95"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
	I0807 20:02:17.495693    1172 kubeadm.go:1160] stopping kube-system containers ...
	I0807 20:02:17.506787    1172 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0807 20:02:17.537224    1172 command_runner.go:130] > 32f103de03d3
	I0807 20:02:17.537224    1172 command_runner.go:130] > b6325ae79a14
	I0807 20:02:17.537224    1172 command_runner.go:130] > d716d608049c
	I0807 20:02:17.537224    1172 command_runner.go:130] > 201691a17a92
	I0807 20:02:17.537224    1172 command_runner.go:130] > ec2579bb9d23
	I0807 20:02:17.537224    1172 command_runner.go:130] > 3b896a77f546
	I0807 20:02:17.537224    1172 command_runner.go:130] > 9fd565bc6207
	I0807 20:02:17.537224    1172 command_runner.go:130] > 0877557fcf51
	I0807 20:02:17.537224    1172 command_runner.go:130] > 1415d4256b4a
	I0807 20:02:17.537224    1172 command_runner.go:130] > c90df84145cb
	I0807 20:02:17.537224    1172 command_runner.go:130] > 1dbaa8c7ed69
	I0807 20:02:17.537224    1172 command_runner.go:130] > c50e3a9ac99f
	I0807 20:02:17.537224    1172 command_runner.go:130] > 548a9e3a6616
	I0807 20:02:17.537224    1172 command_runner.go:130] > 1e5d82deee2f
	I0807 20:02:17.537224    1172 command_runner.go:130] > 92cf9118dac2
	I0807 20:02:17.537224    1172 command_runner.go:130] > 3047b2dc6a14
	I0807 20:02:17.537388    1172 docker.go:483] Stopping containers: [32f103de03d3 b6325ae79a14 d716d608049c 201691a17a92 ec2579bb9d23 3b896a77f546 9fd565bc6207 0877557fcf51 1415d4256b4a c90df84145cb 1dbaa8c7ed69 c50e3a9ac99f 548a9e3a6616 1e5d82deee2f 92cf9118dac2 3047b2dc6a14]
	I0807 20:02:17.546876    1172 ssh_runner.go:195] Run: docker stop 32f103de03d3 b6325ae79a14 d716d608049c 201691a17a92 ec2579bb9d23 3b896a77f546 9fd565bc6207 0877557fcf51 1415d4256b4a c90df84145cb 1dbaa8c7ed69 c50e3a9ac99f 548a9e3a6616 1e5d82deee2f 92cf9118dac2 3047b2dc6a14
	I0807 20:02:17.576218    1172 command_runner.go:130] > 32f103de03d3
	I0807 20:02:17.576218    1172 command_runner.go:130] > b6325ae79a14
	I0807 20:02:17.576218    1172 command_runner.go:130] > d716d608049c
	I0807 20:02:17.576218    1172 command_runner.go:130] > 201691a17a92
	I0807 20:02:17.576310    1172 command_runner.go:130] > ec2579bb9d23
	I0807 20:02:17.576310    1172 command_runner.go:130] > 3b896a77f546
	I0807 20:02:17.576310    1172 command_runner.go:130] > 9fd565bc6207
	I0807 20:02:17.576310    1172 command_runner.go:130] > 0877557fcf51
	I0807 20:02:17.576310    1172 command_runner.go:130] > 1415d4256b4a
	I0807 20:02:17.576310    1172 command_runner.go:130] > c90df84145cb
	I0807 20:02:17.576310    1172 command_runner.go:130] > 1dbaa8c7ed69
	I0807 20:02:17.576310    1172 command_runner.go:130] > c50e3a9ac99f
	I0807 20:02:17.576310    1172 command_runner.go:130] > 548a9e3a6616
	I0807 20:02:17.576310    1172 command_runner.go:130] > 1e5d82deee2f
	I0807 20:02:17.576388    1172 command_runner.go:130] > 92cf9118dac2
	I0807 20:02:17.576388    1172 command_runner.go:130] > 3047b2dc6a14
	I0807 20:02:17.587065    1172 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0807 20:02:17.625386    1172 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0807 20:02:17.647951    1172 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0807 20:02:17.647951    1172 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0807 20:02:17.647951    1172 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0807 20:02:17.647951    1172 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0807 20:02:17.649099    1172 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0807 20:02:17.649099    1172 kubeadm.go:157] found existing configuration files:
	
	I0807 20:02:17.665648    1172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0807 20:02:17.683748    1172 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0807 20:02:17.684726    1172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0807 20:02:17.697108    1172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0807 20:02:17.726660    1172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0807 20:02:17.743539    1172 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0807 20:02:17.744361    1172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0807 20:02:17.757232    1172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0807 20:02:17.791668    1172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0807 20:02:17.809486    1172 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0807 20:02:17.810301    1172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0807 20:02:17.822324    1172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0807 20:02:17.860335    1172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0807 20:02:17.884084    1172 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0807 20:02:17.884084    1172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0807 20:02:17.896560    1172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
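The four grep/rm rounds above implement one rule: keep a kubeconfig only if it already points at `https://control-plane.minikube.internal:8443`, otherwise delete it so `kubeadm init phase kubeconfig` regenerates it. A sketch of that loop against a scratch directory (`/tmp/kube.demo` and the stale `1.2.3.4` endpoint are illustrative; the real loop runs under sudo against `/etc/kubernetes`):

```shell
#!/usr/bin/env bash
# Remove any conf file that is missing or points at a stale API endpoint.
dir=/tmp/kube.demo
rm -rf "$dir" && mkdir -p "$dir"
echo 'server: https://1.2.3.4:8443' > "$dir/admin.conf"   # stale endpoint

for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  grep -q 'https://control-plane.minikube.internal:8443' "$dir/$f" 2>/dev/null \
    || rm -f "$dir/$f"
done

ls -A "$dir"   # prints nothing: every conf was stale or missing
```

In the log run all four files were already absent, so each grep exits 2 and each `rm -f` is a no-op; the next `kubeadm init phase kubeconfig all` step writes fresh copies.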
	I0807 20:02:17.936336    1172 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0807 20:02:17.965255    1172 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0807 20:02:18.297003    1172 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0807 20:02:18.297003    1172 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0807 20:02:18.297090    1172 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0807 20:02:18.297090    1172 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0807 20:02:18.297090    1172 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0807 20:02:18.297090    1172 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0807 20:02:18.297090    1172 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0807 20:02:18.297090    1172 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0807 20:02:18.297090    1172 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0807 20:02:18.297090    1172 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0807 20:02:18.297090    1172 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0807 20:02:18.297090    1172 command_runner.go:130] > [certs] Using the existing "sa" key
	I0807 20:02:18.297251    1172 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0807 20:02:19.913651    1172 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0807 20:02:19.913801    1172 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0807 20:02:19.913801    1172 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0807 20:02:19.913801    1172 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0807 20:02:19.913801    1172 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0807 20:02:19.913801    1172 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0807 20:02:19.913801    1172 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.6165295s)
	I0807 20:02:19.913801    1172 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0807 20:02:20.249821    1172 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0807 20:02:20.249821    1172 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0807 20:02:20.249821    1172 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0807 20:02:20.249821    1172 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0807 20:02:20.354153    1172 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0807 20:02:20.354213    1172 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0807 20:02:20.354213    1172 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0807 20:02:20.354213    1172 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0807 20:02:20.354282    1172 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0807 20:02:20.459772    1172 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0807 20:02:20.459976    1172 api_server.go:52] waiting for apiserver process to appear ...
	I0807 20:02:20.472643    1172 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 20:02:20.981635    1172 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 20:02:21.491201    1172 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 20:02:21.977215    1172 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 20:02:22.484138    1172 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 20:02:22.513143    1172 command_runner.go:130] > 1971
	I0807 20:02:22.513143    1172 api_server.go:72] duration metric: took 2.0531407s to wait for apiserver process to appear ...
	I0807 20:02:22.513143    1172 api_server.go:88] waiting for apiserver healthz status ...
	I0807 20:02:22.513143    1172 api_server.go:253] Checking apiserver healthz at https://172.28.226.95:8443/healthz ...
	I0807 20:02:26.453833    1172 api_server.go:279] https://172.28.226.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0807 20:02:26.454320    1172 api_server.go:103] status: https://172.28.226.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0807 20:02:26.454320    1172 api_server.go:253] Checking apiserver healthz at https://172.28.226.95:8443/healthz ...
	I0807 20:02:26.512422    1172 api_server.go:279] https://172.28.226.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0807 20:02:26.512932    1172 api_server.go:103] status: https://172.28.226.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0807 20:02:26.519411    1172 api_server.go:253] Checking apiserver healthz at https://172.28.226.95:8443/healthz ...
	I0807 20:02:26.537948    1172 api_server.go:279] https://172.28.226.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0807 20:02:26.538511    1172 api_server.go:103] status: https://172.28.226.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0807 20:02:27.028033    1172 api_server.go:253] Checking apiserver healthz at https://172.28.226.95:8443/healthz ...
	I0807 20:02:27.035440    1172 api_server.go:279] https://172.28.226.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0807 20:02:27.035440    1172 api_server.go:103] status: https://172.28.226.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0807 20:02:27.516052    1172 api_server.go:253] Checking apiserver healthz at https://172.28.226.95:8443/healthz ...
	I0807 20:02:27.523779    1172 api_server.go:279] https://172.28.226.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0807 20:02:27.523779    1172 api_server.go:103] status: https://172.28.226.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0807 20:02:28.025705    1172 api_server.go:253] Checking apiserver healthz at https://172.28.226.95:8443/healthz ...
	I0807 20:02:28.036058    1172 api_server.go:279] https://172.28.226.95:8443/healthz returned 200:
	ok
	I0807 20:02:28.036522    1172 round_trippers.go:463] GET https://172.28.226.95:8443/version
	I0807 20:02:28.036587    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:28.036587    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:28.036666    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:28.047831    1172 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0807 20:02:28.048343    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:28.048343    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:28.048343    1172 round_trippers.go:580]     Content-Length: 263
	I0807 20:02:28.048343    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:28 GMT
	I0807 20:02:28.048343    1172 round_trippers.go:580]     Audit-Id: f3924de8-5cfe-44cd-ab6d-e8bdfbf1b0f7
	I0807 20:02:28.048343    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:28.048343    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:28.048343    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:28.048343    1172 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0807 20:02:28.048343    1172 api_server.go:141] control plane version: v1.30.3
	I0807 20:02:28.048343    1172 api_server.go:131] duration metric: took 5.5351293s to wait for apiserver health ...
	I0807 20:02:28.048343    1172 cni.go:84] Creating CNI manager for ""
	I0807 20:02:28.048343    1172 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0807 20:02:28.052357    1172 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0807 20:02:28.073555    1172 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0807 20:02:28.085709    1172 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0807 20:02:28.085768    1172 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0807 20:02:28.085768    1172 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0807 20:02:28.085768    1172 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0807 20:02:28.085768    1172 command_runner.go:130] > Access: 2024-08-07 20:00:47.586820200 +0000
	I0807 20:02:28.085768    1172 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0807 20:02:28.085768    1172 command_runner.go:130] > Change: 2024-08-07 20:00:36.290000000 +0000
	I0807 20:02:28.085768    1172 command_runner.go:130] >  Birth: -
	I0807 20:02:28.085921    1172 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0807 20:02:28.085952    1172 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0807 20:02:28.141236    1172 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0807 20:02:29.525465    1172 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0807 20:02:29.525584    1172 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0807 20:02:29.525584    1172 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0807 20:02:29.525657    1172 command_runner.go:130] > daemonset.apps/kindnet configured
	I0807 20:02:29.525657    1172 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.3844034s)
	I0807 20:02:29.525741    1172 system_pods.go:43] waiting for kube-system pods to appear ...
	I0807 20:02:29.525976    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods
	I0807 20:02:29.526049    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:29.526049    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:29.526049    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:29.531854    1172 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 20:02:29.532212    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:29.532375    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:29.532418    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:29 GMT
	I0807 20:02:29.532699    1172 round_trippers.go:580]     Audit-Id: f8e79090-e8b1-412f-95d2-f43a2412224c
	I0807 20:02:29.532728    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:29.532728    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:29.532728    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:29.534124    1172 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1945"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 85415 chars]
	I0807 20:02:29.539969    1172 system_pods.go:59] 12 kube-system pods found
	I0807 20:02:29.539969    1172 system_pods.go:61] "coredns-7db6d8ff4d-7l6v2" [7de73f9c-93d9-46c6-ae10-b253dd257a19] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0807 20:02:29.539969    1172 system_pods.go:61] "etcd-multinode-116700" [822f1e63-7c8a-4172-927c-32f4e0b5d505] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0807 20:02:29.539969    1172 system_pods.go:61] "kindnet-gk542" [bad4e2c3-505e-4175-9a5b-186a1874ff8d] Running
	I0807 20:02:29.539969    1172 system_pods.go:61] "kindnet-gsjlq" [7dac93b0-0cfa-4d64-a437-ce92de8bf57d] Running
	I0807 20:02:29.539969    1172 system_pods.go:61] "kindnet-kltmx" [b2ddfdd4-b957-45e3-b967-cf2650e86069] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0807 20:02:29.539969    1172 system_pods.go:61] "kube-apiserver-multinode-116700" [5111ea6a-eb9d-4e60-bbc5-698a5882a60a] Pending
	I0807 20:02:29.539969    1172 system_pods.go:61] "kube-controller-manager-multinode-116700" [4d2e8250-9b12-4277-8834-515c1621fc78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0807 20:02:29.539969    1172 system_pods.go:61] "kube-proxy-4lnjd" [254c1a93-f57b-4997-a3a1-d5f145f7c549] Running
	I0807 20:02:29.539969    1172 system_pods.go:61] "kube-proxy-fmjt9" [766df91e-8fd0-457b-8c11-8810059ca4d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0807 20:02:29.539969    1172 system_pods.go:61] "kube-proxy-vcb7n" [d8d87ad6-19cc-45fa-8c9f-1a862fec4e59] Running
	I0807 20:02:29.540991    1172 system_pods.go:61] "kube-scheduler-multinode-116700" [7b6df7b7-8c94-498a-bc4c-74d72efd572a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0807 20:02:29.540991    1172 system_pods.go:61] "storage-provisioner" [8a8036f6-f1a0-4fca-b8dd-ed99c3535b47] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0807 20:02:29.540991    1172 system_pods.go:74] duration metric: took 15.2504ms to wait for pod list to return data ...
	I0807 20:02:29.540991    1172 node_conditions.go:102] verifying NodePressure condition ...
	I0807 20:02:29.540991    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes
	I0807 20:02:29.540991    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:29.540991    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:29.540991    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:29.544983    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:29.544983    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:29.544983    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:29.544983    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:29.544983    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:29.544983    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:29 GMT
	I0807 20:02:29.544983    1172 round_trippers.go:580]     Audit-Id: 463c2353-57a7-42be-b54e-0c1b0dc0e14a
	I0807 20:02:29.544983    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:29.544983    1172 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1945"},"items":[{"metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15629 chars]
	I0807 20:02:29.546973    1172 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 20:02:29.546973    1172 node_conditions.go:123] node cpu capacity is 2
	I0807 20:02:29.546973    1172 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 20:02:29.546973    1172 node_conditions.go:123] node cpu capacity is 2
	I0807 20:02:29.546973    1172 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 20:02:29.546973    1172 node_conditions.go:123] node cpu capacity is 2
	I0807 20:02:29.546973    1172 node_conditions.go:105] duration metric: took 5.9813ms to run NodePressure ...
	I0807 20:02:29.546973    1172 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0807 20:02:29.792569    1172 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0807 20:02:30.020575    1172 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0807 20:02:30.022756    1172 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0807 20:02:30.022829    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0807 20:02:30.022829    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.022829    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.022829    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.029447    1172 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 20:02:30.029447    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.029447    1172 round_trippers.go:580]     Audit-Id: b4593179-8dd9-45f7-bd23-c9691a471adc
	I0807 20:02:30.029447    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.030001    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.030001    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.030001    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.030001    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.030745    1172 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1953"},"items":[{"metadata":{"name":"etcd-multinode-116700","namespace":"kube-system","uid":"822f1e63-7c8a-4172-927c-32f4e0b5d505","resourceVersion":"1915","creationTimestamp":"2024-08-07T20:02:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.226.95:2379","kubernetes.io/config.hash":"9eecaca34ea754a7954ea8f568cb96d3","kubernetes.io/config.mirror":"9eecaca34ea754a7954ea8f568cb96d3","kubernetes.io/config.seen":"2024-08-07T20:02:20.493455845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T20:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 30532 chars]
	I0807 20:02:30.032625    1172 kubeadm.go:739] kubelet initialised
	I0807 20:02:30.032682    1172 kubeadm.go:740] duration metric: took 9.9261ms waiting for restarted kubelet to initialise ...
	I0807 20:02:30.032682    1172 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 20:02:30.032880    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods
	I0807 20:02:30.032963    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.033003    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.033003    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.047837    1172 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0807 20:02:30.048841    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.048868    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.048868    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.048868    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.048868    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.048868    1172 round_trippers.go:580]     Audit-Id: 24d91f64-0521-41a7-8e58-3bea71e46190
	I0807 20:02:30.048868    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.050835    1172 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1953"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87137 chars]
	I0807 20:02:30.055832    1172 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace to be "Ready" ...
	I0807 20:02:30.055832    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:30.055832    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.055832    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.055832    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.059883    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:30.059983    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.059983    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.059983    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.059983    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.059983    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.060048    1172 round_trippers.go:580]     Audit-Id: 77748950-3c22-45e9-9b70-55051db4480c
	I0807 20:02:30.060048    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.060241    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:30.060777    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:30.060777    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.060777    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.060777    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.064145    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:30.064317    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.064317    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.064389    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.064389    1172 round_trippers.go:580]     Audit-Id: 4e324599-44e8-4143-9aff-efb19274d3d0
	I0807 20:02:30.064389    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.064389    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.064389    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.064745    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:30.065236    1172 pod_ready.go:97] node "multinode-116700" hosting pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:30.065311    1172 pod_ready.go:81] duration metric: took 9.4787ms for pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace to be "Ready" ...
	E0807 20:02:30.065311    1172 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-116700" hosting pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:30.065311    1172 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:02:30.065445    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-116700
	I0807 20:02:30.065445    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.065445    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.065445    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.067836    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:30.067836    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.067836    1172 round_trippers.go:580]     Audit-Id: 61dc34d8-5edc-4753-8e4d-44cf0f3cc0a9
	I0807 20:02:30.067836    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.067836    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.067836    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.067836    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.068497    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.068762    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-116700","namespace":"kube-system","uid":"822f1e63-7c8a-4172-927c-32f4e0b5d505","resourceVersion":"1915","creationTimestamp":"2024-08-07T20:02:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.226.95:2379","kubernetes.io/config.hash":"9eecaca34ea754a7954ea8f568cb96d3","kubernetes.io/config.mirror":"9eecaca34ea754a7954ea8f568cb96d3","kubernetes.io/config.seen":"2024-08-07T20:02:20.493455845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T20:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6384 chars]
	I0807 20:02:30.069019    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:30.069019    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.069019    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.069019    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.072653    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:30.072653    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.072653    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.072653    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.072653    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.072653    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.072653    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.072653    1172 round_trippers.go:580]     Audit-Id: cbaf0484-bed7-47a3-9145-2d34c6335afd
	I0807 20:02:30.072653    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:30.072653    1172 pod_ready.go:97] node "multinode-116700" hosting pod "etcd-multinode-116700" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:30.072653    1172 pod_ready.go:81] duration metric: took 7.3415ms for pod "etcd-multinode-116700" in "kube-system" namespace to be "Ready" ...
	E0807 20:02:30.072653    1172 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-116700" hosting pod "etcd-multinode-116700" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:30.072653    1172 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:02:30.072653    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-116700
	I0807 20:02:30.072653    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.072653    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.072653    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.076647    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:30.076647    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.076647    1172 round_trippers.go:580]     Audit-Id: 4caaeb1b-7e00-4d7e-be2a-b0f5a9c93bf9
	I0807 20:02:30.076647    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.076647    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.076647    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.076647    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.076936    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.076991    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-116700","namespace":"kube-system","uid":"5111ea6a-eb9d-4e60-bbc5-698a5882a60a","resourceVersion":"1949","creationTimestamp":"2024-08-07T20:02:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.226.95:8443","kubernetes.io/config.hash":"8066c637edc34431d2657878d0b69f79","kubernetes.io/config.mirror":"8066c637edc34431d2657878d0b69f79","kubernetes.io/config.seen":"2024-08-07T20:02:20.432683231Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T20:02:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7939 chars]
	I0807 20:02:30.077618    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:30.077691    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.077691    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.077691    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.084374    1172 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 20:02:30.084374    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.084374    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.084374    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.084374    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.084374    1172 round_trippers.go:580]     Audit-Id: d5e3dd00-4fa0-449e-b6ea-b58355d25614
	I0807 20:02:30.084374    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.084374    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.085525    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:30.085684    1172 pod_ready.go:97] node "multinode-116700" hosting pod "kube-apiserver-multinode-116700" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:30.085684    1172 pod_ready.go:81] duration metric: took 13.0317ms for pod "kube-apiserver-multinode-116700" in "kube-system" namespace to be "Ready" ...
	E0807 20:02:30.085684    1172 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-116700" hosting pod "kube-apiserver-multinode-116700" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:30.085684    1172 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:02:30.085684    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-116700
	I0807 20:02:30.085684    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.085684    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.085684    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.091265    1172 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 20:02:30.091265    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.091265    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.091265    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.091265    1172 round_trippers.go:580]     Audit-Id: bbe13a2a-6226-427a-aef5-bc92cc438508
	I0807 20:02:30.091265    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.091265    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.091265    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.091265    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-116700","namespace":"kube-system","uid":"4d2e8250-9b12-4277-8834-515c1621fc78","resourceVersion":"1912","creationTimestamp":"2024-08-07T19:37:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ef62d358a9b469de2443e4a4f620921d","kubernetes.io/config.mirror":"ef62d358a9b469de2443e4a4f620921d","kubernetes.io/config.seen":"2024-08-07T19:37:39.552053960Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7732 chars]
	I0807 20:02:30.092248    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:30.092248    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.092248    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.092248    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.094292    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:30.094292    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.094292    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.094292    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.094292    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.094292    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.094292    1172 round_trippers.go:580]     Audit-Id: ffe0dec2-0e97-4714-bc8b-d1e91c5ce4ab
	I0807 20:02:30.094292    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.094292    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:30.095265    1172 pod_ready.go:97] node "multinode-116700" hosting pod "kube-controller-manager-multinode-116700" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:30.095265    1172 pod_ready.go:81] duration metric: took 9.5808ms for pod "kube-controller-manager-multinode-116700" in "kube-system" namespace to be "Ready" ...
	E0807 20:02:30.095265    1172 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-116700" hosting pod "kube-controller-manager-multinode-116700" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:30.095265    1172 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4lnjd" in "kube-system" namespace to be "Ready" ...
	I0807 20:02:30.229064    1172 request.go:629] Waited for 133.7967ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4lnjd
	I0807 20:02:30.229370    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4lnjd
	I0807 20:02:30.229370    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.229370    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.229370    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.232974    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:30.233124    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.233247    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.233247    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.233247    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.233247    1172 round_trippers.go:580]     Audit-Id: 2647c25e-d1c6-4c7d-8856-c441cccd69ac
	I0807 20:02:30.233247    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.233247    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.233761    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4lnjd","generateName":"kube-proxy-","namespace":"kube-system","uid":"254c1a93-f57b-4997-a3a1-d5f145f7c549","resourceVersion":"1843","creationTimestamp":"2024-08-07T19:46:10Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d82856c0-3330-4ab9-b7bf-54ed48646bce","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:46:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d82856c0-3330-4ab9-b7bf-54ed48646bce\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0807 20:02:30.433980    1172 request.go:629] Waited for 199.2358ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/nodes/multinode-116700-m03
	I0807 20:02:30.434357    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700-m03
	I0807 20:02:30.434357    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.434357    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.434357    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.437745    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:30.437745    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.437745    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.437745    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.437745    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.437745    1172 round_trippers.go:580]     Audit-Id: 7e8fd42c-f914-4725-b638-2ea5319862ca
	I0807 20:02:30.437745    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.437745    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.437745    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m03","uid":"9ade310d-2eba-4d92-8b38-64ccda5e080c","resourceVersion":"1854","creationTimestamp":"2024-08-07T19:57:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_57_34_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:57:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4400 chars]
	I0807 20:02:30.438739    1172 pod_ready.go:97] node "multinode-116700-m03" hosting pod "kube-proxy-4lnjd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700-m03" has status "Ready":"Unknown"
	I0807 20:02:30.438739    1172 pod_ready.go:81] duration metric: took 343.4688ms for pod "kube-proxy-4lnjd" in "kube-system" namespace to be "Ready" ...
	E0807 20:02:30.438739    1172 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-116700-m03" hosting pod "kube-proxy-4lnjd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700-m03" has status "Ready":"Unknown"
	I0807 20:02:30.438739    1172 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fmjt9" in "kube-system" namespace to be "Ready" ...
	I0807 20:02:30.625867    1172 request.go:629] Waited for 187.1258ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fmjt9
	I0807 20:02:30.625998    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fmjt9
	I0807 20:02:30.626163    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.626268    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.626291    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.629748    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:30.629748    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.629748    1172 round_trippers.go:580]     Audit-Id: 55cd0960-dceb-4483-a2c9-640e04f8c0e2
	I0807 20:02:30.629748    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.629748    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.629748    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.629748    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.629748    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.629748    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fmjt9","generateName":"kube-proxy-","namespace":"kube-system","uid":"766df91e-8fd0-457b-8c11-8810059ca4d9","resourceVersion":"1952","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d82856c0-3330-4ab9-b7bf-54ed48646bce","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d82856c0-3330-4ab9-b7bf-54ed48646bce\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0807 20:02:30.836931    1172 request.go:629] Waited for 205.8516ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:30.837221    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:30.837221    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:30.837221    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:30.837221    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:30.840960    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:30.840983    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:30.840983    1172 round_trippers.go:580]     Audit-Id: 2cf5d906-e99b-4d25-861e-41164e4ce77f
	I0807 20:02:30.840983    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:30.840983    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:30.840983    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:30.840983    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:30.840983    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:30 GMT
	I0807 20:02:30.841154    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:30.841726    1172 pod_ready.go:97] node "multinode-116700" hosting pod "kube-proxy-fmjt9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:30.841829    1172 pod_ready.go:81] duration metric: took 403.0851ms for pod "kube-proxy-fmjt9" in "kube-system" namespace to be "Ready" ...
	E0807 20:02:30.841829    1172 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-116700" hosting pod "kube-proxy-fmjt9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:30.841829    1172 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vcb7n" in "kube-system" namespace to be "Ready" ...
	I0807 20:02:31.023663    1172 request.go:629] Waited for 181.5326ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vcb7n
	I0807 20:02:31.023743    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vcb7n
	I0807 20:02:31.023743    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:31.023743    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:31.023743    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:31.027553    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:31.027553    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:31.027553    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:31.027553    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:31.027553    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:31 GMT
	I0807 20:02:31.027553    1172 round_trippers.go:580]     Audit-Id: 2a267fea-3fd9-4a2b-a7a7-306a0837c4a3
	I0807 20:02:31.027553    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:31.027850    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:31.028226    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vcb7n","generateName":"kube-proxy-","namespace":"kube-system","uid":"d8d87ad6-19cc-45fa-8c9f-1a862fec4e59","resourceVersion":"661","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d82856c0-3330-4ab9-b7bf-54ed48646bce","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d82856c0-3330-4ab9-b7bf-54ed48646bce\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5836 chars]
	I0807 20:02:31.226467    1172 request.go:629] Waited for 197.1667ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/nodes/multinode-116700-m02
	I0807 20:02:31.226467    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700-m02
	I0807 20:02:31.226693    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:31.226693    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:31.226693    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:31.229258    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:31.229258    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:31.229258    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:31.229258    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:31.230002    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:31.230002    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:31.230002    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:31 GMT
	I0807 20:02:31.230002    1172 round_trippers.go:580]     Audit-Id: 73f837a0-697b-4d56-9d5f-01b5f9b9522c
	I0807 20:02:31.230270    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"1754","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3826 chars]
	I0807 20:02:31.230667    1172 pod_ready.go:92] pod "kube-proxy-vcb7n" in "kube-system" namespace has status "Ready":"True"
	I0807 20:02:31.230667    1172 pod_ready.go:81] duration metric: took 388.833ms for pod "kube-proxy-vcb7n" in "kube-system" namespace to be "Ready" ...
	I0807 20:02:31.230667    1172 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:02:31.428462    1172 request.go:629] Waited for 197.5381ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-116700
	I0807 20:02:31.428704    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-116700
	I0807 20:02:31.428704    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:31.428704    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:31.428704    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:31.432059    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:31.432059    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:31.433086    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:31 GMT
	I0807 20:02:31.433086    1172 round_trippers.go:580]     Audit-Id: b0fb5448-79b5-4980-b9f2-51c658e99485
	I0807 20:02:31.433133    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:31.433133    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:31.433133    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:31.433133    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:31.433263    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-116700","namespace":"kube-system","uid":"7b6df7b7-8c94-498a-bc4c-74d72efd572a","resourceVersion":"1913","creationTimestamp":"2024-08-07T19:37:39Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fde91c95fce6faff219ccfa4b0b2484c","kubernetes.io/config.mirror":"fde91c95fce6faff219ccfa4b0b2484c","kubernetes.io/config.seen":"2024-08-07T19:37:39.552047359Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5444 chars]
	I0807 20:02:31.629942    1172 request.go:629] Waited for 195.8068ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:31.630110    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:31.630110    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:31.630110    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:31.630110    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:31.632681    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:31.632681    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:31.632681    1172 round_trippers.go:580]     Audit-Id: 2e87a254-c2c0-49ca-89f2-87026a94e4b0
	I0807 20:02:31.632681    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:31.632681    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:31.633331    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:31.633331    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:31.633331    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:31 GMT
	I0807 20:02:31.633898    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:31.634527    1172 pod_ready.go:97] node "multinode-116700" hosting pod "kube-scheduler-multinode-116700" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:31.634604    1172 pod_ready.go:81] duration metric: took 403.9318ms for pod "kube-scheduler-multinode-116700" in "kube-system" namespace to be "Ready" ...
	E0807 20:02:31.634604    1172 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-116700" hosting pod "kube-scheduler-multinode-116700" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700" has status "Ready":"False"
	I0807 20:02:31.634604    1172 pod_ready.go:38] duration metric: took 1.6018476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 20:02:31.634679    1172 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0807 20:02:31.655839    1172 command_runner.go:130] > -16
	I0807 20:02:31.656051    1172 ops.go:34] apiserver oom_adj: -16
	I0807 20:02:31.656051    1172 kubeadm.go:597] duration metric: took 14.2419302s to restartPrimaryControlPlane
	I0807 20:02:31.656131    1172 kubeadm.go:394] duration metric: took 14.3101778s to StartCluster
	I0807 20:02:31.656131    1172 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 20:02:31.656131    1172 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 20:02:31.658045    1172 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 20:02:31.659688    1172 start.go:235] Will wait 6m0s for node &{Name: IP:172.28.226.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0807 20:02:31.659741    1172 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0807 20:02:31.660168    1172 config.go:182] Loaded profile config "multinode-116700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 20:02:31.663131    1172 out.go:177] * Verifying Kubernetes components...
	I0807 20:02:31.669054    1172 out.go:177] * Enabled addons: 
	I0807 20:02:31.671513    1172 addons.go:510] duration metric: took 11.8246ms for enable addons: enabled=[]
	I0807 20:02:31.677543    1172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 20:02:31.946334    1172 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 20:02:31.972467    1172 node_ready.go:35] waiting up to 6m0s for node "multinode-116700" to be "Ready" ...
	I0807 20:02:31.973468    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:31.973468    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:31.973468    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:31.973468    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:31.977446    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:31.977446    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:31.977446    1172 round_trippers.go:580]     Audit-Id: 28062720-7444-428d-bda9-fa6ff9fc87c4
	I0807 20:02:31.977446    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:31.977446    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:31.977446    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:31.977446    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:31.977446    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:31 GMT
	I0807 20:02:31.978694    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:32.475181    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:32.475181    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:32.475181    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:32.475181    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:32.478767    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:32.478887    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:32.478887    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:32 GMT
	I0807 20:02:32.478887    1172 round_trippers.go:580]     Audit-Id: 7f873afe-44c7-498b-a4b3-24497c382afb
	I0807 20:02:32.478887    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:32.478887    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:32.478887    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:32.478887    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:32.480186    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:32.987293    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:32.987293    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:32.987293    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:32.987293    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:32.991054    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:32.991356    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:32.991356    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:32.991432    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:33 GMT
	I0807 20:02:32.991432    1172 round_trippers.go:580]     Audit-Id: b79303ae-051c-4d12-bbc0-a9c3f9e0c9d3
	I0807 20:02:32.991432    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:32.991432    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:32.991432    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:32.991432    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:33.473992    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:33.474301    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:33.474301    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:33.474414    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:33.481135    1172 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 20:02:33.481346    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:33.481346    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:33.481346    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:33 GMT
	I0807 20:02:33.481346    1172 round_trippers.go:580]     Audit-Id: f4214583-7382-43cf-842c-cdde3e9855b6
	I0807 20:02:33.481346    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:33.481346    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:33.481346    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:33.481346    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:33.986723    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:33.986723    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:33.986723    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:33.986723    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:33.990363    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:33.991196    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:33.991196    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:33.991196    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:33.991196    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:33.991196    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:34 GMT
	I0807 20:02:33.991334    1172 round_trippers.go:580]     Audit-Id: 89a3befa-fbb3-498c-8fb1-972d492df91d
	I0807 20:02:33.991334    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:33.991374    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:33.992336    1172 node_ready.go:53] node "multinode-116700" has status "Ready":"False"
	I0807 20:02:34.483881    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:34.483881    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:34.484196    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:34.484196    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:34.488755    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:34.488755    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:34.488755    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:34.488755    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:34 GMT
	I0807 20:02:34.488755    1172 round_trippers.go:580]     Audit-Id: 3fce2f23-72a5-4813-80a6-252f9d60e6e6
	I0807 20:02:34.488755    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:34.488755    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:34.488755    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:34.488755    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:34.982625    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:34.982625    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:34.982625    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:34.982625    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:34.987210    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:34.987210    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:34.987441    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:35 GMT
	I0807 20:02:34.987441    1172 round_trippers.go:580]     Audit-Id: 7ceaae53-23ce-4143-bc21-5a9eac82a57d
	I0807 20:02:34.987441    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:34.987441    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:34.987441    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:34.987441    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:34.987617    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:35.479925    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:35.479925    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:35.479925    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:35.479925    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:35.483462    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:35.483462    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:35.483898    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:35 GMT
	I0807 20:02:35.483898    1172 round_trippers.go:580]     Audit-Id: 7503987e-d476-43d5-b0f9-2f7fca1bd815
	I0807 20:02:35.483898    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:35.483898    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:35.483898    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:35.483898    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:35.484004    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:35.983558    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:35.983558    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:35.983558    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:35.983558    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:35.989954    1172 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 20:02:35.989954    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:35.990019    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:35.990019    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:35.990019    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:35.990019    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:36 GMT
	I0807 20:02:35.990117    1172 round_trippers.go:580]     Audit-Id: 13a683f6-52a5-486c-a1ff-89455457ff43
	I0807 20:02:35.990175    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:35.991199    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:36.483247    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:36.483247    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:36.483247    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:36.483335    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:36.487504    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:36.487504    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:36.487504    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:36.487563    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:36.487563    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:36.487563    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:36 GMT
	I0807 20:02:36.487563    1172 round_trippers.go:580]     Audit-Id: 891433bb-4097-4d45-984c-3e7ade9e18c6
	I0807 20:02:36.487617    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:36.487617    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:36.488682    1172 node_ready.go:53] node "multinode-116700" has status "Ready":"False"
	I0807 20:02:36.980123    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:36.980226    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:36.980226    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:36.980226    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:36.987944    1172 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 20:02:36.987944    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:36.987944    1172 round_trippers.go:580]     Audit-Id: 16f70f5d-5d19-47a3-b9d0-213fc2d451ae
	I0807 20:02:36.987944    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:36.987944    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:36.987944    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:36.987944    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:36.987944    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:37 GMT
	I0807 20:02:36.987944    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:37.481170    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:37.481333    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:37.481333    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:37.481333    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:37.484261    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:37.484261    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:37.484261    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:37.485180    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:37.485180    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:37.485180    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:37 GMT
	I0807 20:02:37.485180    1172 round_trippers.go:580]     Audit-Id: bf988547-6486-49a8-892b-0ff96bd013d3
	I0807 20:02:37.485180    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:37.486301    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:37.980048    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:37.980126    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:37.980126    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:37.980126    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:37.984487    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:37.984487    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:37.984487    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:37.984487    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:37.984914    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:37.984914    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:37.984914    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:38 GMT
	I0807 20:02:37.984914    1172 round_trippers.go:580]     Audit-Id: f99f0558-7749-42fb-b424-cfa921f0c8a9
	I0807 20:02:37.985427    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:38.479956    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:38.480090    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:38.480173    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:38.480173    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:38.484247    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:38.484247    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:38.484247    1172 round_trippers.go:580]     Audit-Id: 5472898b-f4d7-4327-b46b-9d7a2000afa9
	I0807 20:02:38.484247    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:38.484247    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:38.484247    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:38.484247    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:38.484761    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:38 GMT
	I0807 20:02:38.485635    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:38.982403    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:38.982592    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:38.982592    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:38.982592    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:38.985173    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:38.985173    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:38.985173    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:38.985173    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:38.985173    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:39 GMT
	I0807 20:02:38.985173    1172 round_trippers.go:580]     Audit-Id: fb19d351-def8-4097-8bfd-21334d986bfa
	I0807 20:02:38.985173    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:38.985173    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:38.985173    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1871","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0807 20:02:38.986204    1172 node_ready.go:53] node "multinode-116700" has status "Ready":"False"
	I0807 20:02:39.488094    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:39.488094    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:39.488094    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:39.488094    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:39.492479    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:39.492565    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:39.492610    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:39.492610    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:39.492610    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:39 GMT
	I0807 20:02:39.492674    1172 round_trippers.go:580]     Audit-Id: 4c0406b4-1053-473e-924f-2a77a8d4d0a8
	I0807 20:02:39.492760    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:39.492760    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:39.493039    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:39.974794    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:39.974794    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:39.974794    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:39.974794    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:39.978467    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:39.978467    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:39.978813    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:39.978813    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:39.978813    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:39.978813    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:39 GMT
	I0807 20:02:39.978813    1172 round_trippers.go:580]     Audit-Id: 9ae0e219-f98a-4cad-9120-a1c129abb84c
	I0807 20:02:39.978813    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:39.978995    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:40.488046    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:40.488107    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:40.488107    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:40.488107    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:40.492007    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:40.492435    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:40.492435    1172 round_trippers.go:580]     Audit-Id: 2fea58af-6cbe-4034-9f2c-bf3657b6ed62
	I0807 20:02:40.492435    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:40.492435    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:40.492502    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:40.492502    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:40.492502    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:40 GMT
	I0807 20:02:40.493079    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:40.974436    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:40.974548    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:40.974548    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:40.974548    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:40.978109    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:40.978109    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:40.979033    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:40 GMT
	I0807 20:02:40.979033    1172 round_trippers.go:580]     Audit-Id: 5ff32c08-4d12-4259-b5f6-eacdbb09df4f
	I0807 20:02:40.979033    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:40.979033    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:40.979033    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:40.979033    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:40.979255    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:41.485676    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:41.485676    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:41.485676    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:41.485676    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:41.490338    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:41.490338    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:41.490695    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:41.490695    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:41.490695    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:41.490695    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:41.490695    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:41 GMT
	I0807 20:02:41.490695    1172 round_trippers.go:580]     Audit-Id: c55403ef-1096-430b-8552-802e6b1358c5
	I0807 20:02:41.490942    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:41.491595    1172 node_ready.go:53] node "multinode-116700" has status "Ready":"False"
	I0807 20:02:41.985949    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:41.986015    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:41.986015    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:41.986015    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:41.990436    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:41.990436    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:41.990497    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:41.990497    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:41.990497    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:41.990497    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:42 GMT
	I0807 20:02:41.990497    1172 round_trippers.go:580]     Audit-Id: 3fb2d2e1-9010-4c21-a815-776d52b7733c
	I0807 20:02:41.990497    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:41.991410    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:42.485798    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:42.485798    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:42.485881    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:42.485881    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:42.488828    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:42.488828    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:42.488828    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:42.488828    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:42.488828    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:42.488828    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:42 GMT
	I0807 20:02:42.488828    1172 round_trippers.go:580]     Audit-Id: baefad3f-0707-45ae-b242-bd8bb38aa43d
	I0807 20:02:42.488828    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:42.489484    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:42.988215    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:42.988215    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:42.988215    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:42.988215    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:42.991834    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:42.991834    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:42.991834    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:43 GMT
	I0807 20:02:42.991834    1172 round_trippers.go:580]     Audit-Id: fb7b93fe-f678-49bf-9186-f6df7963b507
	I0807 20:02:42.991834    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:42.991834    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:42.991834    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:42.991834    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:42.992809    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:43.472970    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:43.473296    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:43.473296    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:43.473296    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:43.477089    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:43.478087    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:43.478087    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:43.478087    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:43.478087    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:43 GMT
	I0807 20:02:43.478087    1172 round_trippers.go:580]     Audit-Id: 9137dcfe-94d1-4fdc-a7dc-e8f213e03b50
	I0807 20:02:43.478087    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:43.478087    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:43.478852    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:43.987538    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:43.987619    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:43.987619    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:43.987619    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:43.990901    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:43.991828    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:43.991828    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:43.991828    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:44 GMT
	I0807 20:02:43.991828    1172 round_trippers.go:580]     Audit-Id: bb94eb97-33bb-4dfa-a2c1-016dbab3219f
	I0807 20:02:43.991828    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:43.991828    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:43.991828    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:43.992081    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:43.992449    1172 node_ready.go:53] node "multinode-116700" has status "Ready":"False"
	I0807 20:02:44.486473    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:44.486473    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:44.486473    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:44.486625    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:44.490457    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:44.491475    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:44.491475    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:44.491475    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:44 GMT
	I0807 20:02:44.491475    1172 round_trippers.go:580]     Audit-Id: 83c10ef6-e38e-43c9-90be-166e13e62969
	I0807 20:02:44.491475    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:44.491566    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:44.491566    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:44.492619    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:44.985369    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:44.985369    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:44.985795    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:44.985795    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:44.989257    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:44.990136    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:44.990136    1172 round_trippers.go:580]     Audit-Id: 61e25bf3-66fc-4383-bc7a-2e7101f62f08
	I0807 20:02:44.990136    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:44.990136    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:44.990136    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:44.990136    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:44.990136    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:45 GMT
	I0807 20:02:44.990450    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:45.482904    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:45.483008    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:45.483008    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:45.483008    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:45.489500    1172 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 20:02:45.489500    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:45.489500    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:45.489500    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:45.489500    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:45.489500    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:45.489500    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:45 GMT
	I0807 20:02:45.489500    1172 round_trippers.go:580]     Audit-Id: 1b07f12c-f95b-4bad-87f4-3a576b350463
	I0807 20:02:45.490124    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:45.982402    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:45.982578    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:45.982578    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:45.982578    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:45.987197    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:45.987478    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:45.987478    1172 round_trippers.go:580]     Audit-Id: dfdb34cd-3020-402e-a447-ffad979b8f13
	I0807 20:02:45.987478    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:45.987478    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:45.987478    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:45.987478    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:45.987478    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:46 GMT
	I0807 20:02:45.987478    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:46.481628    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:46.481722    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:46.481722    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:46.481722    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:46.488351    1172 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 20:02:46.488351    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:46.488351    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:46.488351    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:46.488351    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:46.488351    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:46 GMT
	I0807 20:02:46.488351    1172 round_trippers.go:580]     Audit-Id: 58679323-d3ae-4597-8ff0-bc6b11c17152
	I0807 20:02:46.488351    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:46.488351    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:46.489257    1172 node_ready.go:53] node "multinode-116700" has status "Ready":"False"
	I0807 20:02:46.979003    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:46.979057    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:46.979057    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:46.979057    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:46.982838    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:46.983739    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:46.983739    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:46.983739    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:46.983739    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:46.983739    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:47 GMT
	I0807 20:02:46.983739    1172 round_trippers.go:580]     Audit-Id: 2244b236-b37d-4823-91cf-facd140b9012
	I0807 20:02:46.983739    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:46.983940    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"1987","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0807 20:02:47.475415    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:47.475506    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:47.475506    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:47.475506    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:47.480815    1172 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 20:02:47.481346    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:47.481423    1172 round_trippers.go:580]     Audit-Id: a3e73824-025c-4479-a460-44d386efb72d
	I0807 20:02:47.481423    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:47.481423    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:47.481423    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:47.481423    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:47.481423    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:47 GMT
	I0807 20:02:47.481850    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2005","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0807 20:02:47.482239    1172 node_ready.go:49] node "multinode-116700" has status "Ready":"True"
	I0807 20:02:47.482239    1172 node_ready.go:38] duration metric: took 15.5095743s for node "multinode-116700" to be "Ready" ...
	I0807 20:02:47.482239    1172 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
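	The log above shows the shape of minikube's readiness wait: it issues a GET against the node object roughly every 500 ms until the Ready condition flips to True (here after ~15.5 s), records the duration, then repeats the same pattern for each system-critical pod with a 6m0s budget. A minimal, hypothetical sketch of that poll-until-ready loop (the `check_ready` callable stands in for the API request; all names here are illustrative, not minikube's actual functions):

```python
import time

def wait_for_ready(check_ready, timeout=360.0, interval=0.5):
    """Poll check_ready() until it returns True or the timeout elapses.

    Returns the elapsed seconds on success (the "duration metric" in the
    log); raises TimeoutError if the deadline passes first.
    """
    start = time.monotonic()
    while True:
        if check_ready():
            return time.monotonic() - start
        if time.monotonic() - start >= timeout:
            raise TimeoutError("resource never reported Ready")
        time.sleep(interval)

# Illustrative stand-in for the API call: Ready on the third poll.
polls = iter([False, False, True])
elapsed = wait_for_ready(lambda: next(polls), interval=0.01)
```

	In the real client the check would parse the `status.conditions` entry with `type: Ready` out of each Node or Pod response body, which is what the repeated `node_ready.go` / `pod_ready.go` lines above are reporting.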
	I0807 20:02:47.482239    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods
	I0807 20:02:47.482239    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:47.482239    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:47.482239    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:47.491773    1172 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0807 20:02:47.491773    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:47.491773    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:47.491773    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:47.491773    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:47 GMT
	I0807 20:02:47.491773    1172 round_trippers.go:580]     Audit-Id: 0c5acbaa-b000-4adf-bf7d-1b0b72dc8274
	I0807 20:02:47.491773    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:47.491773    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:47.493552    1172 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2006"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86163 chars]
	I0807 20:02:47.497680    1172 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace to be "Ready" ...
	I0807 20:02:47.497680    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:47.497680    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:47.497680    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:47.497680    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:47.500355    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:47.500355    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:47.500355    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:47.500637    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:47.500637    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:47 GMT
	I0807 20:02:47.500637    1172 round_trippers.go:580]     Audit-Id: 5e61182d-f183-4f4e-a5f4-4d60c4bc7da8
	I0807 20:02:47.500637    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:47.500637    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:47.500956    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:47.501218    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:47.501218    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:47.501218    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:47.501218    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:47.504426    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:47.504723    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:47.504723    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:47.504723    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:47.504723    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:47 GMT
	I0807 20:02:47.504723    1172 round_trippers.go:580]     Audit-Id: 93bbe4f5-4c0f-4faf-a919-44179b91a49f
	I0807 20:02:47.504723    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:47.504723    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:47.505103    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2005","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0807 20:02:48.002331    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:48.002331    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:48.002331    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:48.002331    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:48.006680    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:48.007403    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:48.007403    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:48.007403    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:48.007403    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:48 GMT
	I0807 20:02:48.007403    1172 round_trippers.go:580]     Audit-Id: da2ddd1d-5874-4b52-9140-bd62fc152298
	I0807 20:02:48.007403    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:48.007403    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:48.007672    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:48.008372    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:48.008372    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:48.008372    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:48.008372    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:48.013670    1172 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 20:02:48.013670    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:48.013670    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:48.013670    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:48.013670    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:48 GMT
	I0807 20:02:48.013670    1172 round_trippers.go:580]     Audit-Id: 7c24c2c0-f664-4f82-8573-be820f144457
	I0807 20:02:48.013670    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:48.013670    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:48.014362    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2005","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0807 20:02:48.503844    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:48.503844    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:48.503844    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:48.503844    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:48.508442    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:48.508442    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:48.508442    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:48.508442    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:48.508442    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:48.508442    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:48.508442    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:48 GMT
	I0807 20:02:48.508442    1172 round_trippers.go:580]     Audit-Id: 57ad2480-4f29-4049-89ec-e2d8d906d4de
	I0807 20:02:48.509754    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:48.510868    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:48.510868    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:48.510868    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:48.510938    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:48.513064    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:48.514056    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:48.514056    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:48.514056    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:48.514056    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:48 GMT
	I0807 20:02:48.514056    1172 round_trippers.go:580]     Audit-Id: c16c9408-aab1-4070-805c-61a05c35ee3b
	I0807 20:02:48.514056    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:48.514138    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:48.514376    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2005","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0807 20:02:49.001170    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:49.001246    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:49.001246    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:49.001246    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:49.004571    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:49.005444    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:49.005444    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:49.005444    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:49.005444    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:49.005444    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:49.005444    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:49 GMT
	I0807 20:02:49.005444    1172 round_trippers.go:580]     Audit-Id: 7ea24022-e900-45c8-a69b-f490dc00ac9c
	I0807 20:02:49.005444    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:49.006426    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:49.006426    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:49.006495    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:49.006495    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:49.009854    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:49.009996    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:49.009996    1172 round_trippers.go:580]     Audit-Id: 6b02ea79-4576-4568-8658-66daa564c002
	I0807 20:02:49.009996    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:49.009996    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:49.009996    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:49.009996    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:49.009996    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:49 GMT
	I0807 20:02:49.010300    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2005","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0807 20:02:49.502735    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:49.502735    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:49.502735    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:49.502735    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:49.507382    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:49.507599    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:49.507599    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:49.507599    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:49.507599    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:49.507599    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:49.507599    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:49 GMT
	I0807 20:02:49.507599    1172 round_trippers.go:580]     Audit-Id: 2b3bb48d-eab6-444e-9c0b-95d9a5e868cd
	I0807 20:02:49.507998    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:49.508675    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:49.508675    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:49.508675    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:49.508675    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:49.512462    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:49.512591    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:49.512591    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:49.512591    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:49.512591    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:49.512591    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:49 GMT
	I0807 20:02:49.512591    1172 round_trippers.go:580]     Audit-Id: 6b14fa3a-28f7-45cb-af03-50a0840dfd16
	I0807 20:02:49.512591    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:49.513099    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:49.513646    1172 pod_ready.go:102] pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace has status "Ready":"False"
	I0807 20:02:50.000384    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:50.000454    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:50.000454    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:50.000454    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:50.004772    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:50.005311    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:50.005311    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:50.005311    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:50 GMT
	I0807 20:02:50.005311    1172 round_trippers.go:580]     Audit-Id: 95572ba9-7464-49c4-a689-18e57ba8cefc
	I0807 20:02:50.005311    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:50.005311    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:50.005311    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:50.005714    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:50.007076    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:50.007076    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:50.007114    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:50.007114    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:50.014435    1172 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 20:02:50.014540    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:50.014540    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:50.014540    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:50.014540    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:50.014602    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:50 GMT
	I0807 20:02:50.014602    1172 round_trippers.go:580]     Audit-Id: 59a2a45a-ef3b-4f89-8c92-f641a597dd36
	I0807 20:02:50.014627    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:50.014627    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:50.513192    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:50.513192    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:50.513192    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:50.513192    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:50.517124    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:50.517124    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:50.517124    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:50.517124    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:50.517124    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:50 GMT
	I0807 20:02:50.517124    1172 round_trippers.go:580]     Audit-Id: 1d958595-0b12-4811-9877-94900d46f196
	I0807 20:02:50.517124    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:50.517124    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:50.518163    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:50.519162    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:50.519162    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:50.519162    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:50.519162    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:50.522151    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:50.522151    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:50.522151    1172 round_trippers.go:580]     Audit-Id: dc19d8ce-6ff4-47f9-bd1f-d8af53d60dd2
	I0807 20:02:50.522278    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:50.522278    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:50.522278    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:50.522278    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:50.522278    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:50 GMT
	I0807 20:02:50.522736    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:50.999686    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:50.999745    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:50.999745    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:50.999745    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:51.007056    1172 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 20:02:51.007169    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:51.007169    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:51.007228    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:51 GMT
	I0807 20:02:51.007228    1172 round_trippers.go:580]     Audit-Id: e3625428-9c77-4943-897c-8292e1205ce2
	I0807 20:02:51.007228    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:51.007257    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:51.007257    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:51.007257    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:51.008124    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:51.008124    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:51.008124    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:51.008124    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:51.010471    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:51.011496    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:51.011496    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:51.011496    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:51.011496    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:51 GMT
	I0807 20:02:51.011496    1172 round_trippers.go:580]     Audit-Id: d0830fbb-7cc2-47cb-b344-1350030b4d7d
	I0807 20:02:51.011496    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:51.011496    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:51.011496    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:51.499524    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:51.499726    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:51.499726    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:51.499795    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:51.504333    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:51.504333    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:51.504333    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:51 GMT
	I0807 20:02:51.504778    1172 round_trippers.go:580]     Audit-Id: 71998f35-385c-43de-86f9-3cc7b8bc4baf
	I0807 20:02:51.504778    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:51.504778    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:51.504778    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:51.504778    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:51.505202    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:51.506035    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:51.506101    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:51.506101    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:51.506101    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:51.508301    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:51.508301    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:51.508301    1172 round_trippers.go:580]     Audit-Id: 50a601e2-d68b-495a-aecd-4f40f5d35dd4
	I0807 20:02:51.508301    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:51.508301    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:51.508301    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:51.508301    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:51.508301    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:51 GMT
	I0807 20:02:51.508790    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:52.012592    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:52.012592    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:52.012733    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:52.012733    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:52.015520    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:52.016543    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:52.016543    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:52.016543    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:52.016543    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:52.016543    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:52.016543    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:52 GMT
	I0807 20:02:52.016543    1172 round_trippers.go:580]     Audit-Id: ce3b1b8b-150e-4dca-b9cd-2ab4c411b1b5
	I0807 20:02:52.016774    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:52.017843    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:52.017906    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:52.017906    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:52.017906    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:52.021116    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:52.021116    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:52.021116    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:52.021116    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:52.021116    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:52.021116    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:52.021116    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:52 GMT
	I0807 20:02:52.021116    1172 round_trippers.go:580]     Audit-Id: 24819833-a381-4585-957b-0fdb395d0949
	I0807 20:02:52.021116    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:52.022082    1172 pod_ready.go:102] pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace has status "Ready":"False"
	I0807 20:02:52.500560    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:52.500560    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:52.500560    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:52.500560    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:52.505368    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:52.505368    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:52.505368    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:52 GMT
	I0807 20:02:52.505368    1172 round_trippers.go:580]     Audit-Id: 4f849644-05ad-472b-84a7-4b3b6c07e57e
	I0807 20:02:52.505368    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:52.505368    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:52.505368    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:52.505368    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:52.505706    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:52.506653    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:52.506653    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:52.506653    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:52.506653    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:52.510287    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:52.510287    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:52.510287    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:52.510287    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:52.510287    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:52.510287    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:52 GMT
	I0807 20:02:52.510287    1172 round_trippers.go:580]     Audit-Id: 2a14043f-7e43-44f1-bff1-2a6a371b9028
	I0807 20:02:52.510287    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:52.511139    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:53.001439    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:53.001439    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:53.001688    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:53.001688    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:53.006302    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:53.006855    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:53.006924    1172 round_trippers.go:580]     Audit-Id: 7e1c77eb-e021-4678-a1e8-d07012b5bde0
	I0807 20:02:53.006968    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:53.006968    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:53.007050    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:53.007050    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:53.007050    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:53 GMT
	I0807 20:02:53.008209    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:53.008858    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:53.008858    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:53.008858    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:53.008858    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:53.014031    1172 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 20:02:53.014031    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:53.014031    1172 round_trippers.go:580]     Audit-Id: b43739a0-4a0c-4e9f-89be-3b28bd826ad8
	I0807 20:02:53.014031    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:53.014031    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:53.014031    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:53.014031    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:53.014031    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:53 GMT
	I0807 20:02:53.014860    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:53.501743    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:53.501743    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:53.501868    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:53.501868    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:53.505202    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:53.505202    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:53.505202    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:53.505202    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:53.505202    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:53.505202    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:53 GMT
	I0807 20:02:53.505202    1172 round_trippers.go:580]     Audit-Id: 74e35494-5f03-476d-a122-f8167c96d2ed
	I0807 20:02:53.505202    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:53.506481    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:53.507315    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:53.507381    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:53.507381    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:53.507381    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:53.510645    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:53.510845    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:53.510973    1172 round_trippers.go:580]     Audit-Id: 10739d98-48fd-4e73-b460-8027706ecff8
	I0807 20:02:53.510973    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:53.510973    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:53.510973    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:53.510973    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:53.510973    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:53 GMT
	I0807 20:02:53.511214    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:54.001411    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:54.001411    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:54.001512    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:54.001512    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:54.005959    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:54.006527    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:54.006527    1172 round_trippers.go:580]     Audit-Id: 996b2645-bb48-4b0a-999e-4f29313ff17d
	I0807 20:02:54.006527    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:54.006527    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:54.006527    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:54.006527    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:54.006527    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:54 GMT
	I0807 20:02:54.006710    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:54.007761    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:54.007761    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:54.007761    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:54.007814    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:54.009540    1172 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0807 20:02:54.010553    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:54.010553    1172 round_trippers.go:580]     Audit-Id: a5de278f-c66e-4c90-98cf-1ae670caa8ad
	I0807 20:02:54.010553    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:54.010553    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:54.010624    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:54.010624    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:54.010624    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:54 GMT
	I0807 20:02:54.010845    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:54.500449    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:54.500449    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:54.500449    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:54.500449    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:54.507714    1172 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 20:02:54.507797    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:54.507797    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:54.507797    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:54.507944    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:54.507944    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:54 GMT
	I0807 20:02:54.507944    1172 round_trippers.go:580]     Audit-Id: ca3c47f3-5ecc-4501-a05a-ed7e8fbf427f
	I0807 20:02:54.507944    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:54.507985    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:54.508892    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:54.508892    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:54.508942    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:54.508942    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:54.511128    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:54.511512    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:54.511512    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:54.511512    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:54.511512    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:54 GMT
	I0807 20:02:54.511512    1172 round_trippers.go:580]     Audit-Id: 2c932c84-4dca-4dfc-8549-03d5c2a83397
	I0807 20:02:54.511512    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:54.511512    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:54.511605    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:54.511605    1172 pod_ready.go:102] pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace has status "Ready":"False"
	I0807 20:02:55.001627    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:55.001627    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:55.001627    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:55.001627    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:55.006970    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:55.007042    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:55.007042    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:55.007134    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:55.007134    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:55.007134    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:55 GMT
	I0807 20:02:55.007134    1172 round_trippers.go:580]     Audit-Id: ff67b1af-3a9b-4fea-b317-060feb112841
	I0807 20:02:55.007134    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:55.007463    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:55.008224    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:55.008224    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:55.008345    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:55.008345    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:55.011645    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:55.011645    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:55.011645    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:55.011645    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:55.011645    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:55.011645    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:55 GMT
	I0807 20:02:55.011645    1172 round_trippers.go:580]     Audit-Id: f1b58de9-2b8a-4077-91e3-bfb599ba2a87
	I0807 20:02:55.011645    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:55.011645    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:55.500757    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:55.500757    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:55.500757    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:55.500757    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:55.505292    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:55.505365    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:55.505365    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:55.505365    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:55.505365    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:55.505464    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:55.505464    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:55 GMT
	I0807 20:02:55.505464    1172 round_trippers.go:580]     Audit-Id: 1b4ff5a8-e965-4852-9241-756420014182
	I0807 20:02:55.506180    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:55.506977    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:55.507051    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:55.507051    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:55.507051    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:55.512046    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:55.512046    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:55.512046    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:55 GMT
	I0807 20:02:55.512046    1172 round_trippers.go:580]     Audit-Id: 475d8975-fc07-4356-91fe-93376834f2c3
	I0807 20:02:55.512046    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:55.512046    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:55.512046    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:55.512046    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:55.512046    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:55.999308    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:55.999308    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:55.999308    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:55.999308    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:56.002984    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:56.002984    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:56.003839    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:56 GMT
	I0807 20:02:56.003839    1172 round_trippers.go:580]     Audit-Id: cb71f139-3006-4907-b5b0-cc6660b63193
	I0807 20:02:56.003839    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:56.003839    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:56.003839    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:56.003839    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:56.004765    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:56.005675    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:56.005675    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:56.005782    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:56.005782    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:56.010216    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:56.010238    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:56.010238    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:56 GMT
	I0807 20:02:56.010238    1172 round_trippers.go:580]     Audit-Id: 5d9a95bb-ccea-4dd3-9584-e899b2ee0df8
	I0807 20:02:56.010238    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:56.010238    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:56.010311    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:56.010311    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:56.011344    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:56.514629    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:56.514684    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:56.514730    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:56.514730    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:56.519263    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:56.519263    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:56.519263    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:56.519263    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:56 GMT
	I0807 20:02:56.519263    1172 round_trippers.go:580]     Audit-Id: df9eb2a6-4918-4f63-95c5-358f95169b8f
	I0807 20:02:56.519811    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:56.519811    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:56.519811    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:56.520176    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:56.520606    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:56.520606    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:56.521137    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:56.521137    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:56.522833    1172 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0807 20:02:56.522833    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:56.522833    1172 round_trippers.go:580]     Audit-Id: 86c00543-9d8a-4f77-a02c-dfdec503fdde
	I0807 20:02:56.522833    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:56.522833    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:56.522833    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:56.522833    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:56.523834    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:56 GMT
	I0807 20:02:56.524156    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:56.524835    1172 pod_ready.go:102] pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace has status "Ready":"False"
	I0807 20:02:57.012663    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:57.012842    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:57.012842    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:57.012842    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:57.017000    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:57.017000    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:57.017000    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:57 GMT
	I0807 20:02:57.017000    1172 round_trippers.go:580]     Audit-Id: 59d5668c-eb36-49b2-9698-697db15ea1ff
	I0807 20:02:57.017000    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:57.017000    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:57.017000    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:57.017000    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:57.020655    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:57.021446    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:57.021446    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:57.021560    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:57.021560    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:57.023849    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:57.023849    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:57.023849    1172 round_trippers.go:580]     Audit-Id: 3427f1aa-7aaa-4cc7-bb2c-128d6b4871a2
	I0807 20:02:57.023849    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:57.023849    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:57.023849    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:57.024235    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:57.024235    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:57 GMT
	I0807 20:02:57.024314    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:57.499241    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:57.499360    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:57.499360    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:57.499360    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:57.503200    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:02:57.503200    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:57.503566    1172 round_trippers.go:580]     Audit-Id: 10ef5145-48e9-495c-909a-06334b149822
	I0807 20:02:57.503566    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:57.503566    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:57.503566    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:57.503566    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:57.503566    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:57 GMT
	I0807 20:02:57.503761    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:57.504516    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:57.504516    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:57.504516    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:57.504516    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:57.507421    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:57.507421    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:57.507421    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:57.507421    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:57.507421    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:57.507421    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:57 GMT
	I0807 20:02:57.507421    1172 round_trippers.go:580]     Audit-Id: eabb99dc-bc8e-42f6-9d49-686628ff47e8
	I0807 20:02:57.507421    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:57.507421    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:57.999838    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:57.999914    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:57.999914    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:57.999914    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:58.008837    1172 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 20:02:58.008837    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:58.008837    1172 round_trippers.go:580]     Audit-Id: 44bc3223-b83f-4a44-a1ae-c4641b9c0452
	I0807 20:02:58.008837    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:58.008837    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:58.008837    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:58.008837    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:58.008837    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:58 GMT
	I0807 20:02:58.008837    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:58.009675    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:58.009675    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:58.009675    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:58.009675    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:58.012309    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:58.013392    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:58.013392    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:58.013392    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:58.013428    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:58 GMT
	I0807 20:02:58.013428    1172 round_trippers.go:580]     Audit-Id: 52ae4a43-1208-4820-8f64-eca2ee7e288d
	I0807 20:02:58.013428    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:58.013428    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:58.013648    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:58.499327    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:58.499327    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:58.499327    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:58.499327    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:58.504915    1172 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 20:02:58.504915    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:58.504915    1172 round_trippers.go:580]     Audit-Id: b4f4dfcb-300b-4c32-a5af-d5720b7ad022
	I0807 20:02:58.504915    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:58.505144    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:58.505144    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:58.505144    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:58.505144    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:58 GMT
	I0807 20:02:58.505331    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:58.506140    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:58.506254    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:58.506254    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:58.506254    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:58.509185    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:58.509185    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:58.509185    1172 round_trippers.go:580]     Audit-Id: ad60ac05-592b-4aef-a5a7-10940feac597
	I0807 20:02:58.509185    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:58.509350    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:58.509350    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:58.509350    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:58.509350    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:58 GMT
	I0807 20:02:58.509693    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:58.999631    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:58.999869    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:58.999998    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:58.999998    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:59.004560    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:59.004560    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:59.004560    1172 round_trippers.go:580]     Audit-Id: 496115ba-382f-47f1-9ff2-7d902a5991d7
	I0807 20:02:59.005012    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:59.005012    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:59.005071    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:59.005071    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:59.005114    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:59 GMT
	I0807 20:02:59.005320    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:59.006121    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:59.006149    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:59.006149    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:59.006149    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:59.009143    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:02:59.009143    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:59.009143    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:59.009143    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:59.009143    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:59 GMT
	I0807 20:02:59.009143    1172 round_trippers.go:580]     Audit-Id: 39e19874-a5d9-4fb9-bb04-90e4f8dd41fc
	I0807 20:02:59.009143    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:59.009143    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:59.009831    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:02:59.010353    1172 pod_ready.go:102] pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace has status "Ready":"False"
	I0807 20:02:59.500331    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:02:59.500400    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:59.500400    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:59.500400    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:59.506464    1172 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 20:02:59.506464    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:59.506464    1172 round_trippers.go:580]     Audit-Id: 3ab00976-8899-4485-aadd-7d5de388497f
	I0807 20:02:59.506464    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:59.506464    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:59.506464    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:59.506464    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:59.506464    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:59 GMT
	I0807 20:02:59.507362    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:02:59.507362    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:02:59.507362    1172 round_trippers.go:469] Request Headers:
	I0807 20:02:59.507362    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:02:59.507362    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:02:59.511393    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:02:59.511393    1172 round_trippers.go:577] Response Headers:
	I0807 20:02:59.511393    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:02:59.511393    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:02:59.511393    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:02:59 GMT
	I0807 20:02:59.511393    1172 round_trippers.go:580]     Audit-Id: cc2940a5-8c80-4e39-8a58-98ef620053ea
	I0807 20:02:59.511393    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:02:59.511393    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:02:59.512383    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:03:00.008214    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:03:00.008530    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:00.008616    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:00.008616    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:00.013096    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:03:00.013096    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:00.013096    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:00 GMT
	I0807 20:03:00.013096    1172 round_trippers.go:580]     Audit-Id: b4d991b9-4d31-41fc-83a0-0204593a7401
	I0807 20:03:00.013096    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:00.013096    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:00.013096    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:00.013096    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:00.013096    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"1920","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0807 20:03:00.014089    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:00.014089    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:00.014089    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:00.014089    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:00.020100    1172 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 20:03:00.020190    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:00.020190    1172 round_trippers.go:580]     Audit-Id: 2faffb69-bd3f-4f2f-9f5a-2984a8099eb6
	I0807 20:03:00.020190    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:00.020190    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:00.020190    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:00.020190    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:00.020257    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:00 GMT
	I0807 20:03:00.020257    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:03:00.498974    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:03:00.498974    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:00.498974    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:00.498974    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:00.507990    1172 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0807 20:03:00.508302    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:00.508302    1172 round_trippers.go:580]     Audit-Id: 5dc31837-79cc-48c5-9a06-455e6ef855cc
	I0807 20:03:00.508302    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:00.508302    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:00.508302    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:00.508302    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:00.508302    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:00 GMT
	I0807 20:03:00.508569    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"2028","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7017 chars]
	I0807 20:03:00.509839    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:00.509917    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:00.509917    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:00.509917    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:00.518265    1172 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 20:03:00.518265    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:00.518265    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:00 GMT
	I0807 20:03:00.518265    1172 round_trippers.go:580]     Audit-Id: 27f742d6-064a-4039-8044-1a09157d974f
	I0807 20:03:00.518265    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:00.518265    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:00.518265    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:00.518265    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:00.518265    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:03:00.999824    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:03:01.000093    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:01.000093    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:01.000093    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:01.004567    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:03:01.004567    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:01.004567    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:01.004567    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:01.004567    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:01.004567    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:01.004567    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:01 GMT
	I0807 20:03:01.004567    1172 round_trippers.go:580]     Audit-Id: 114929a0-f56b-4589-b6ab-6cc30437105d
	I0807 20:03:01.004567    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"2028","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7017 chars]
	I0807 20:03:01.005603    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:01.005603    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:01.005603    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:01.005603    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:01.007919    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:03:01.007919    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:01.007919    1172 round_trippers.go:580]     Audit-Id: 317cfa20-7f59-4a15-a483-36136dce8fd0
	I0807 20:03:01.007919    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:01.007919    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:01.007919    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:01.007919    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:01.008525    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:01 GMT
	I0807 20:03:01.008782    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:03:01.501149    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:03:01.501206    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:01.501206    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:01.501264    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:01.505599    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:03:01.505879    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:01.505879    1172 round_trippers.go:580]     Audit-Id: 8c9e0de0-89f4-4e83-ad2b-e5faa8a96887
	I0807 20:03:01.505936    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:01.505936    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:01.505936    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:01.505936    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:01.505936    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:01 GMT
	I0807 20:03:01.505936    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"2028","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7017 chars]
	I0807 20:03:01.507525    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:01.507525    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:01.507525    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:01.507525    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:01.513430    1172 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 20:03:01.513636    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:01.513636    1172 round_trippers.go:580]     Audit-Id: 5b8dfcea-550e-4984-ad4d-903009f40f5b
	I0807 20:03:01.513636    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:01.513636    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:01.513636    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:01.513636    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:01.513636    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:01 GMT
	I0807 20:03:01.514168    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:03:01.514383    1172 pod_ready.go:102] pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace has status "Ready":"False"
	I0807 20:03:02.005113    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7l6v2
	I0807 20:03:02.005113    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.005113    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.005113    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.008733    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:03:02.008733    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.009483    1172 round_trippers.go:580]     Audit-Id: 3cb274e2-d5c3-4ad3-bdae-daee9175a420
	I0807 20:03:02.009483    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.009483    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.009483    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.009580    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.009580    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.009635    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"2034","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6788 chars]
	I0807 20:03:02.010550    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:02.010550    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.010550    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.010550    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.015692    1172 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 20:03:02.015769    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.015769    1172 round_trippers.go:580]     Audit-Id: 4822affe-a42e-41bc-bf44-8b144609c799
	I0807 20:03:02.015769    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.015769    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.015769    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.015845    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.015865    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.017474    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:03:02.017670    1172 pod_ready.go:92] pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace has status "Ready":"True"
	I0807 20:03:02.017670    1172 pod_ready.go:81] duration metric: took 14.5198049s for pod "coredns-7db6d8ff4d-7l6v2" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.017670    1172 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.017670    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-116700
	I0807 20:03:02.017670    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.017670    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.017670    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.020674    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:03:02.020674    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.020674    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.020674    1172 round_trippers.go:580]     Audit-Id: cd777aa6-9437-407b-af59-45654df48fb7
	I0807 20:03:02.020674    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.020674    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.020674    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.020674    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.020674    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-116700","namespace":"kube-system","uid":"822f1e63-7c8a-4172-927c-32f4e0b5d505","resourceVersion":"1992","creationTimestamp":"2024-08-07T20:02:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.226.95:2379","kubernetes.io/config.hash":"9eecaca34ea754a7954ea8f568cb96d3","kubernetes.io/config.mirror":"9eecaca34ea754a7954ea8f568cb96d3","kubernetes.io/config.seen":"2024-08-07T20:02:20.493455845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T20:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6160 chars]
	I0807 20:03:02.020674    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:02.021793    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.021793    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.021828    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.024423    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:03:02.024423    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.024423    1172 round_trippers.go:580]     Audit-Id: 12aa9d7c-bc74-476a-bda2-43dfad5e450f
	I0807 20:03:02.024423    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.024423    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.024423    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.024423    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.024423    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.025412    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:03:02.025412    1172 pod_ready.go:92] pod "etcd-multinode-116700" in "kube-system" namespace has status "Ready":"True"
	I0807 20:03:02.025412    1172 pod_ready.go:81] duration metric: took 7.7428ms for pod "etcd-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.025412    1172 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.025412    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-116700
	I0807 20:03:02.025412    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.025412    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.025412    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.029431    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:03:02.029431    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.030413    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.030413    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.030413    1172 round_trippers.go:580]     Audit-Id: 8e373379-5950-47e8-a440-824a1c6e4524
	I0807 20:03:02.030413    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.030413    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.030413    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.030413    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-116700","namespace":"kube-system","uid":"5111ea6a-eb9d-4e60-bbc5-698a5882a60a","resourceVersion":"1970","creationTimestamp":"2024-08-07T20:02:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.226.95:8443","kubernetes.io/config.hash":"8066c637edc34431d2657878d0b69f79","kubernetes.io/config.mirror":"8066c637edc34431d2657878d0b69f79","kubernetes.io/config.seen":"2024-08-07T20:02:20.432683231Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T20:02:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7695 chars]
	I0807 20:03:02.031525    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:02.031557    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.031557    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.031557    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.034250    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:03:02.034250    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.034250    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.034250    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.034902    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.034902    1172 round_trippers.go:580]     Audit-Id: a3d39749-5260-4a2f-b7e4-abb93333a3cc
	I0807 20:03:02.034902    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.034902    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.035307    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:03:02.035730    1172 pod_ready.go:92] pod "kube-apiserver-multinode-116700" in "kube-system" namespace has status "Ready":"True"
	I0807 20:03:02.035730    1172 pod_ready.go:81] duration metric: took 10.3169ms for pod "kube-apiserver-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.035730    1172 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.035730    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-116700
	I0807 20:03:02.035730    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.035730    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.035730    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.038314    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:03:02.038314    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.038314    1172 round_trippers.go:580]     Audit-Id: a277fb69-5905-4e8f-bc9f-895997a657a5
	I0807 20:03:02.038314    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.038314    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.038314    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.038314    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.038314    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.041299    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-116700","namespace":"kube-system","uid":"4d2e8250-9b12-4277-8834-515c1621fc78","resourceVersion":"1960","creationTimestamp":"2024-08-07T19:37:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ef62d358a9b469de2443e4a4f620921d","kubernetes.io/config.mirror":"ef62d358a9b469de2443e4a4f620921d","kubernetes.io/config.seen":"2024-08-07T19:37:39.552053960Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7470 chars]
	I0807 20:03:02.042746    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:02.042804    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.042804    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.042804    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.045319    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:03:02.045632    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.045632    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.045632    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.045632    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.045632    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.045632    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.045632    1172 round_trippers.go:580]     Audit-Id: 5b954eda-be05-4299-8f48-046b0ac1561a
	I0807 20:03:02.045632    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:03:02.046537    1172 pod_ready.go:92] pod "kube-controller-manager-multinode-116700" in "kube-system" namespace has status "Ready":"True"
	I0807 20:03:02.046615    1172 pod_ready.go:81] duration metric: took 10.885ms for pod "kube-controller-manager-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.046615    1172 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4lnjd" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.046697    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4lnjd
	I0807 20:03:02.046796    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.046796    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.046827    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.050166    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:03:02.050166    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.050166    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.050166    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.050166    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.050166    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.050166    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.050166    1172 round_trippers.go:580]     Audit-Id: 9ba0a788-5cec-4a48-946a-54e9ebcce385
	I0807 20:03:02.050166    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4lnjd","generateName":"kube-proxy-","namespace":"kube-system","uid":"254c1a93-f57b-4997-a3a1-d5f145f7c549","resourceVersion":"1843","creationTimestamp":"2024-08-07T19:46:10Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d82856c0-3330-4ab9-b7bf-54ed48646bce","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:46:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d82856c0-3330-4ab9-b7bf-54ed48646bce\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0807 20:03:02.051211    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700-m03
	I0807 20:03:02.051277    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.051313    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.051340    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.054188    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:03:02.055057    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.055057    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.055057    1172 round_trippers.go:580]     Audit-Id: 200816e2-1bfa-4f25-b8ec-01d896a5a1f0
	I0807 20:03:02.055057    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.055057    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.055057    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.055057    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.055308    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m03","uid":"9ade310d-2eba-4d92-8b38-64ccda5e080c","resourceVersion":"2012","creationTimestamp":"2024-08-07T19:57:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_57_34_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:57:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4400 chars]
	I0807 20:03:02.055634    1172 pod_ready.go:97] node "multinode-116700-m03" hosting pod "kube-proxy-4lnjd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700-m03" has status "Ready":"Unknown"
	I0807 20:03:02.055634    1172 pod_ready.go:81] duration metric: took 9.0194ms for pod "kube-proxy-4lnjd" in "kube-system" namespace to be "Ready" ...
	E0807 20:03:02.055634    1172 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-116700-m03" hosting pod "kube-proxy-4lnjd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-116700-m03" has status "Ready":"Unknown"
	I0807 20:03:02.055634    1172 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fmjt9" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.208640    1172 request.go:629] Waited for 152.9204ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fmjt9
	I0807 20:03:02.208946    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fmjt9
	I0807 20:03:02.208946    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.208946    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.208946    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.212345    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:03:02.212345    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.213368    1172 round_trippers.go:580]     Audit-Id: 5da7ba5d-119a-4259-a9e5-8b876f17c7b7
	I0807 20:03:02.213368    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.213402    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.213402    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.213402    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.213402    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.213482    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fmjt9","generateName":"kube-proxy-","namespace":"kube-system","uid":"766df91e-8fd0-457b-8c11-8810059ca4d9","resourceVersion":"1952","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d82856c0-3330-4ab9-b7bf-54ed48646bce","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d82856c0-3330-4ab9-b7bf-54ed48646bce\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0807 20:03:02.410781    1172 request.go:629] Waited for 196.5313ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:02.411097    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:02.411097    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.411097    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.411097    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.413710    1172 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 20:03:02.413710    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.414465    1172 round_trippers.go:580]     Audit-Id: 03053c05-0c5c-4156-9b9d-9a6521f1e111
	I0807 20:03:02.414465    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.414465    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.414465    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.414465    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.414465    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.414832    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:03:02.415427    1172 pod_ready.go:92] pod "kube-proxy-fmjt9" in "kube-system" namespace has status "Ready":"True"
	I0807 20:03:02.415526    1172 pod_ready.go:81] duration metric: took 359.8876ms for pod "kube-proxy-fmjt9" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.415526    1172 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vcb7n" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.613599    1172 request.go:629] Waited for 197.9983ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vcb7n
	I0807 20:03:02.613599    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vcb7n
	I0807 20:03:02.613846    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.613846    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.613846    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.618143    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:03:02.618143    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.618143    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.619156    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.619156    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.619156    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.619183    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.619183    1172 round_trippers.go:580]     Audit-Id: 157478b2-c889-4f1d-9e0c-1388ce8e9c9b
	I0807 20:03:02.619356    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vcb7n","generateName":"kube-proxy-","namespace":"kube-system","uid":"d8d87ad6-19cc-45fa-8c9f-1a862fec4e59","resourceVersion":"661","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d82856c0-3330-4ab9-b7bf-54ed48646bce","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d82856c0-3330-4ab9-b7bf-54ed48646bce\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5836 chars]
	I0807 20:03:02.816583    1172 request.go:629] Waited for 196.3547ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/nodes/multinode-116700-m02
	I0807 20:03:02.816583    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700-m02
	I0807 20:03:02.816824    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:02.816824    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:02.816824    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:02.820114    1172 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 20:03:02.820190    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:02.820190    1172 round_trippers.go:580]     Audit-Id: d642448a-b7cd-41f5-a272-2aaa5e7a1c22
	I0807 20:03:02.820190    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:02.820190    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:02.820190    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:02.820190    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:02.820190    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:02 GMT
	I0807 20:03:02.820543    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700-m02","uid":"95fa38ae-e99d-47d4-a12c-06eb42d66eb3","resourceVersion":"1754","creationTimestamp":"2024-08-07T19:41:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_07T19_41_08_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:41:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3826 chars]
	I0807 20:03:02.821167    1172 pod_ready.go:92] pod "kube-proxy-vcb7n" in "kube-system" namespace has status "Ready":"True"
	I0807 20:03:02.821167    1172 pod_ready.go:81] duration metric: took 405.6359ms for pod "kube-proxy-vcb7n" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:02.821167    1172 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:03.019604    1172 request.go:629] Waited for 198.3351ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-116700
	I0807 20:03:03.019845    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-116700
	I0807 20:03:03.020098    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:03.020179    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:03.020522    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:03.025162    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:03:03.025162    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:03.025860    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:03.025860    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:03.025860    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:03.025860    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:03 GMT
	I0807 20:03:03.025860    1172 round_trippers.go:580]     Audit-Id: 373f633d-00b8-4723-b5d1-57e5fa7fb3e3
	I0807 20:03:03.025860    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:03.026273    1172 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-116700","namespace":"kube-system","uid":"7b6df7b7-8c94-498a-bc4c-74d72efd572a","resourceVersion":"1996","creationTimestamp":"2024-08-07T19:37:39Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fde91c95fce6faff219ccfa4b0b2484c","kubernetes.io/config.mirror":"fde91c95fce6faff219ccfa4b0b2484c","kubernetes.io/config.seen":"2024-08-07T19:37:39.552047359Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5200 chars]
	I0807 20:03:03.207819    1172 request.go:629] Waited for 180.795ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:03.207819    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes/multinode-116700
	I0807 20:03:03.207819    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:03.207819    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:03.207819    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:03.212461    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:03:03.212461    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:03.212461    1172 round_trippers.go:580]     Audit-Id: 1976ae5c-cb83-4f09-9992-eaf24de0b5c0
	I0807 20:03:03.212461    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:03.212845    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:03.212845    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:03.212845    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:03.212845    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:03 GMT
	I0807 20:03:03.213233    1172 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-07T19:37:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0807 20:03:03.214072    1172 pod_ready.go:92] pod "kube-scheduler-multinode-116700" in "kube-system" namespace has status "Ready":"True"
	I0807 20:03:03.214170    1172 pod_ready.go:81] duration metric: took 392.9977ms for pod "kube-scheduler-multinode-116700" in "kube-system" namespace to be "Ready" ...
	I0807 20:03:03.214170    1172 pod_ready.go:38] duration metric: took 15.7317317s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 20:03:03.214267    1172 api_server.go:52] waiting for apiserver process to appear ...
	I0807 20:03:03.228733    1172 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 20:03:03.258795    1172 command_runner.go:130] > 1971
	I0807 20:03:03.258795    1172 api_server.go:72] duration metric: took 31.5985841s to wait for apiserver process to appear ...
	I0807 20:03:03.258795    1172 api_server.go:88] waiting for apiserver healthz status ...
	I0807 20:03:03.258962    1172 api_server.go:253] Checking apiserver healthz at https://172.28.226.95:8443/healthz ...
	I0807 20:03:03.266616    1172 api_server.go:279] https://172.28.226.95:8443/healthz returned 200:
	ok
	I0807 20:03:03.266900    1172 round_trippers.go:463] GET https://172.28.226.95:8443/version
	I0807 20:03:03.266946    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:03.266946    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:03.266978    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:03.268783    1172 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0807 20:03:03.268783    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:03.268783    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:03.269325    1172 round_trippers.go:580]     Content-Length: 263
	I0807 20:03:03.269325    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:03 GMT
	I0807 20:03:03.269325    1172 round_trippers.go:580]     Audit-Id: f4a30aa2-293b-493a-93e5-1d6e247793fc
	I0807 20:03:03.269325    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:03.269325    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:03.269325    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:03.269325    1172 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0807 20:03:03.269566    1172 api_server.go:141] control plane version: v1.30.3
	I0807 20:03:03.269608    1172 api_server.go:131] duration metric: took 10.6457ms to wait for apiserver health ...
	I0807 20:03:03.269608    1172 system_pods.go:43] waiting for kube-system pods to appear ...
	I0807 20:03:03.411965    1172 request.go:629] Waited for 142.1045ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods
	I0807 20:03:03.412241    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods
	I0807 20:03:03.412241    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:03.412337    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:03.412410    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:03.418723    1172 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 20:03:03.419648    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:03.419648    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:03 GMT
	I0807 20:03:03.419648    1172 round_trippers.go:580]     Audit-Id: 535244d1-8f7f-4c0f-a8e7-4c7e55c46053
	I0807 20:03:03.419648    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:03.419648    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:03.419648    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:03.419648    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:03.421579    1172 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2039"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"2034","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86523 chars]
	I0807 20:03:03.425433    1172 system_pods.go:59] 12 kube-system pods found
	I0807 20:03:03.425433    1172 system_pods.go:61] "coredns-7db6d8ff4d-7l6v2" [7de73f9c-93d9-46c6-ae10-b253dd257a19] Running
	I0807 20:03:03.425433    1172 system_pods.go:61] "etcd-multinode-116700" [822f1e63-7c8a-4172-927c-32f4e0b5d505] Running
	I0807 20:03:03.425433    1172 system_pods.go:61] "kindnet-gk542" [bad4e2c3-505e-4175-9a5b-186a1874ff8d] Running
	I0807 20:03:03.425433    1172 system_pods.go:61] "kindnet-gsjlq" [7dac93b0-0cfa-4d64-a437-ce92de8bf57d] Running
	I0807 20:03:03.425433    1172 system_pods.go:61] "kindnet-kltmx" [b2ddfdd4-b957-45e3-b967-cf2650e86069] Running
	I0807 20:03:03.425433    1172 system_pods.go:61] "kube-apiserver-multinode-116700" [5111ea6a-eb9d-4e60-bbc5-698a5882a60a] Running
	I0807 20:03:03.425433    1172 system_pods.go:61] "kube-controller-manager-multinode-116700" [4d2e8250-9b12-4277-8834-515c1621fc78] Running
	I0807 20:03:03.425433    1172 system_pods.go:61] "kube-proxy-4lnjd" [254c1a93-f57b-4997-a3a1-d5f145f7c549] Running
	I0807 20:03:03.425433    1172 system_pods.go:61] "kube-proxy-fmjt9" [766df91e-8fd0-457b-8c11-8810059ca4d9] Running
	I0807 20:03:03.425433    1172 system_pods.go:61] "kube-proxy-vcb7n" [d8d87ad6-19cc-45fa-8c9f-1a862fec4e59] Running
	I0807 20:03:03.425433    1172 system_pods.go:61] "kube-scheduler-multinode-116700" [7b6df7b7-8c94-498a-bc4c-74d72efd572a] Running
	I0807 20:03:03.425433    1172 system_pods.go:61] "storage-provisioner" [8a8036f6-f1a0-4fca-b8dd-ed99c3535b47] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0807 20:03:03.425965    1172 system_pods.go:74] duration metric: took 156.3553ms to wait for pod list to return data ...
	I0807 20:03:03.425965    1172 default_sa.go:34] waiting for default service account to be created ...
	I0807 20:03:03.614865    1172 request.go:629] Waited for 188.4012ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/namespaces/default/serviceaccounts
	I0807 20:03:03.614865    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/default/serviceaccounts
	I0807 20:03:03.614865    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:03.614865    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:03.614865    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:03.619263    1172 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 20:03:03.619676    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:03.619676    1172 round_trippers.go:580]     Audit-Id: 2b4430e2-794e-4a1d-99de-30c7c4731427
	I0807 20:03:03.619676    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:03.619676    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:03.619676    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:03.619676    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:03.619676    1172 round_trippers.go:580]     Content-Length: 262
	I0807 20:03:03.619787    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:03 GMT
	I0807 20:03:03.619787    1172 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"2039"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"f9ade84e-dceb-49d5-8e06-66799b7c129c","resourceVersion":"345","creationTimestamp":"2024-08-07T19:37:52Z"}}]}
	I0807 20:03:03.620243    1172 default_sa.go:45] found service account: "default"
	I0807 20:03:03.620332    1172 default_sa.go:55] duration metric: took 194.2755ms for default service account to be created ...
	I0807 20:03:03.620332    1172 system_pods.go:116] waiting for k8s-apps to be running ...
	I0807 20:03:03.817757    1172 request.go:629] Waited for 197.0137ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods
	I0807 20:03:03.817858    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/namespaces/kube-system/pods
	I0807 20:03:03.817858    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:03.817858    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:03.817858    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:03.825995    1172 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 20:03:03.825995    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:03.826958    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:03.826958    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:03.826958    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:03.826958    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:03 GMT
	I0807 20:03:03.826958    1172 round_trippers.go:580]     Audit-Id: 037451bc-8c89-4251-80a8-fba82a981de3
	I0807 20:03:03.826958    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:03.828763    1172 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2039"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-7l6v2","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7de73f9c-93d9-46c6-ae10-b253dd257a19","resourceVersion":"2034","creationTimestamp":"2024-08-07T19:37:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"26f3775d-afda-4408-b967-dc333bdd23fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-07T19:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f3775d-afda-4408-b967-dc333bdd23fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86523 chars]
	I0807 20:03:03.834181    1172 system_pods.go:86] 12 kube-system pods found
	I0807 20:03:03.834181    1172 system_pods.go:89] "coredns-7db6d8ff4d-7l6v2" [7de73f9c-93d9-46c6-ae10-b253dd257a19] Running
	I0807 20:03:03.834181    1172 system_pods.go:89] "etcd-multinode-116700" [822f1e63-7c8a-4172-927c-32f4e0b5d505] Running
	I0807 20:03:03.834181    1172 system_pods.go:89] "kindnet-gk542" [bad4e2c3-505e-4175-9a5b-186a1874ff8d] Running
	I0807 20:03:03.834181    1172 system_pods.go:89] "kindnet-gsjlq" [7dac93b0-0cfa-4d64-a437-ce92de8bf57d] Running
	I0807 20:03:03.834181    1172 system_pods.go:89] "kindnet-kltmx" [b2ddfdd4-b957-45e3-b967-cf2650e86069] Running
	I0807 20:03:03.834181    1172 system_pods.go:89] "kube-apiserver-multinode-116700" [5111ea6a-eb9d-4e60-bbc5-698a5882a60a] Running
	I0807 20:03:03.834181    1172 system_pods.go:89] "kube-controller-manager-multinode-116700" [4d2e8250-9b12-4277-8834-515c1621fc78] Running
	I0807 20:03:03.834181    1172 system_pods.go:89] "kube-proxy-4lnjd" [254c1a93-f57b-4997-a3a1-d5f145f7c549] Running
	I0807 20:03:03.834181    1172 system_pods.go:89] "kube-proxy-fmjt9" [766df91e-8fd0-457b-8c11-8810059ca4d9] Running
	I0807 20:03:03.834181    1172 system_pods.go:89] "kube-proxy-vcb7n" [d8d87ad6-19cc-45fa-8c9f-1a862fec4e59] Running
	I0807 20:03:03.834181    1172 system_pods.go:89] "kube-scheduler-multinode-116700" [7b6df7b7-8c94-498a-bc4c-74d72efd572a] Running
	I0807 20:03:03.834181    1172 system_pods.go:89] "storage-provisioner" [8a8036f6-f1a0-4fca-b8dd-ed99c3535b47] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0807 20:03:03.834181    1172 system_pods.go:126] duration metric: took 213.8466ms to wait for k8s-apps to be running ...
	I0807 20:03:03.834181    1172 system_svc.go:44] waiting for kubelet service to be running ....
	I0807 20:03:03.848868    1172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 20:03:03.875825    1172 system_svc.go:56] duration metric: took 41.6429ms WaitForService to wait for kubelet
	I0807 20:03:03.875825    1172 kubeadm.go:582] duration metric: took 32.2156058s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 20:03:03.875825    1172 node_conditions.go:102] verifying NodePressure condition ...
	I0807 20:03:04.005323    1172 request.go:629] Waited for 129.3724ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.226.95:8443/api/v1/nodes
	I0807 20:03:04.005593    1172 round_trippers.go:463] GET https://172.28.226.95:8443/api/v1/nodes
	I0807 20:03:04.005749    1172 round_trippers.go:469] Request Headers:
	I0807 20:03:04.005749    1172 round_trippers.go:473]     Accept: application/json, */*
	I0807 20:03:04.005749    1172 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0807 20:03:04.011071    1172 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 20:03:04.011899    1172 round_trippers.go:577] Response Headers:
	I0807 20:03:04.011899    1172 round_trippers.go:580]     Content-Type: application/json
	I0807 20:03:04.011899    1172 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4e01dc6-2cb3-45f7-b826-aa3947b4abf7
	I0807 20:03:04.011899    1172 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 79be1581-a7c9-4d14-862e-4cec0b4871b6
	I0807 20:03:04.011899    1172 round_trippers.go:580]     Date: Wed, 07 Aug 2024 20:03:04 GMT
	I0807 20:03:04.011899    1172 round_trippers.go:580]     Audit-Id: 7803b889-490f-4885-8672-d15f9f19f7aa
	I0807 20:03:04.011899    1172 round_trippers.go:580]     Cache-Control: no-cache, private
	I0807 20:03:04.012689    1172 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2039"},"items":[{"metadata":{"name":"multinode-116700","uid":"2cc06e03-7354-4737-af43-066da9631b2e","resourceVersion":"2010","creationTimestamp":"2024-08-07T19:37:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-116700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e","minikube.k8s.io/name":"multinode-116700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_07T19_37_40_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15502 chars]
	I0807 20:03:04.013651    1172 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 20:03:04.013706    1172 node_conditions.go:123] node cpu capacity is 2
	I0807 20:03:04.013706    1172 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 20:03:04.013706    1172 node_conditions.go:123] node cpu capacity is 2
	I0807 20:03:04.013706    1172 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 20:03:04.013771    1172 node_conditions.go:123] node cpu capacity is 2
	I0807 20:03:04.013771    1172 node_conditions.go:105] duration metric: took 137.9447ms to run NodePressure ...
	I0807 20:03:04.013771    1172 start.go:241] waiting for startup goroutines ...
	I0807 20:03:04.013771    1172 start.go:246] waiting for cluster config update ...
	I0807 20:03:04.013771    1172 start.go:255] writing updated cluster config ...
	I0807 20:03:04.018293    1172 out.go:177] 
	I0807 20:03:04.021879    1172 config.go:182] Loaded profile config "ha-766300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 20:03:04.028607    1172 config.go:182] Loaded profile config "multinode-116700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 20:03:04.029218    1172 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\config.json ...
	I0807 20:03:04.035607    1172 out.go:177] * Starting "multinode-116700-m02" worker node in "multinode-116700" cluster
	I0807 20:03:04.037608    1172 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 20:03:04.037608    1172 cache.go:56] Caching tarball of preloaded images
	I0807 20:03:04.038771    1172 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0807 20:03:04.038946    1172 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 20:03:04.039004    1172 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\config.json ...
	I0807 20:03:04.041079    1172 start.go:360] acquireMachinesLock for multinode-116700-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 20:03:04.041079    1172 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-116700-m02"
	I0807 20:03:04.041079    1172 start.go:96] Skipping create...Using existing machine configuration
	I0807 20:03:04.041079    1172 fix.go:54] fixHost starting: m02
	I0807 20:03:04.042523    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:03:06.270197    1172 main.go:141] libmachine: [stdout =====>] : Off
	
	I0807 20:03:06.271149    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:06.271249    1172 fix.go:112] recreateIfNeeded on multinode-116700-m02: state=Stopped err=<nil>
	W0807 20:03:06.271249    1172 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 20:03:06.277623    1172 out.go:177] * Restarting existing hyperv VM for "multinode-116700-m02" ...
	I0807 20:03:06.280612    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-116700-m02
	I0807 20:03:09.506197    1172 main.go:141] libmachine: [stdout =====>] : 
	I0807 20:03:09.506197    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:09.506197    1172 main.go:141] libmachine: Waiting for host to start...
	I0807 20:03:09.506197    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:03:11.872665    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:03:11.872665    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:11.872821    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:03:14.501913    1172 main.go:141] libmachine: [stdout =====>] : 
	I0807 20:03:14.501913    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:15.512262    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:03:17.830256    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:03:17.830642    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:17.830642    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:03:20.520797    1172 main.go:141] libmachine: [stdout =====>] : 
	I0807 20:03:20.520953    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:21.522707    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:03:23.798260    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:03:23.798260    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:23.798260    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:03:26.469273    1172 main.go:141] libmachine: [stdout =====>] : 
	I0807 20:03:26.469273    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:27.480907    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:03:29.810988    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:03:29.810988    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:29.811777    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:03:32.435693    1172 main.go:141] libmachine: [stdout =====>] : 
	I0807 20:03:32.435693    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:33.450597    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:03:35.767053    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:03:35.767053    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:35.767799    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:03:38.415674    1172 main.go:141] libmachine: [stdout =====>] : 172.28.235.119
	
	I0807 20:03:38.415674    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:38.418845    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:03:40.702676    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:03:40.702676    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:40.702676    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:03:43.362301    1172 main.go:141] libmachine: [stdout =====>] : 172.28.235.119
	
	I0807 20:03:43.362301    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:43.362301    1172 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-116700\config.json ...
	I0807 20:03:43.365389    1172 machine.go:94] provisionDockerMachine start ...
	I0807 20:03:43.365503    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:03:45.726529    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:03:45.726889    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:45.726889    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:03:48.521577    1172 main.go:141] libmachine: [stdout =====>] : 172.28.235.119
	
	I0807 20:03:48.522042    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:48.526893    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:03:48.527755    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.119 22 <nil> <nil>}
	I0807 20:03:48.527755    1172 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 20:03:48.658956    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0807 20:03:48.659055    1172 buildroot.go:166] provisioning hostname "multinode-116700-m02"
	I0807 20:03:48.659055    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:03:51.081900    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:03:51.081900    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:51.082286    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:03:53.881840    1172 main.go:141] libmachine: [stdout =====>] : 172.28.235.119
	
	I0807 20:03:53.882424    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:53.887608    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:03:53.889272    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.119 22 <nil> <nil>}
	I0807 20:03:53.889272    1172 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-116700-m02 && echo "multinode-116700-m02" | sudo tee /etc/hostname
	I0807 20:03:54.060253    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-116700-m02
	
	I0807 20:03:54.060295    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:03:56.391326    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:03:56.391326    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:56.392090    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:03:59.215553    1172 main.go:141] libmachine: [stdout =====>] : 172.28.235.119
	
	I0807 20:03:59.215553    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:03:59.222927    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:03:59.223198    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.119 22 <nil> <nil>}
	I0807 20:03:59.223198    1172 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-116700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-116700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-116700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 20:03:59.376483    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 20:03:59.376483    1172 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0807 20:03:59.376597    1172 buildroot.go:174] setting up certificates
	I0807 20:03:59.376597    1172 provision.go:84] configureAuth start
	I0807 20:03:59.376679    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:04:01.733200    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:04:01.733200    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:04:01.733786    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:04:04.482829    1172 main.go:141] libmachine: [stdout =====>] : 172.28.235.119
	
	I0807 20:04:04.482829    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:04:04.482829    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:04:06.802398    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:04:06.802841    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:04:06.802898    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:04:09.549409    1172 main.go:141] libmachine: [stdout =====>] : 172.28.235.119
	
	I0807 20:04:09.549409    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:04:09.549409    1172 provision.go:143] copyHostCerts
	I0807 20:04:09.549409    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0807 20:04:09.550394    1172 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0807 20:04:09.550394    1172 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0807 20:04:09.550602    1172 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0807 20:04:09.551856    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0807 20:04:09.551856    1172 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0807 20:04:09.551856    1172 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0807 20:04:09.552383    1172 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0807 20:04:09.553341    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0807 20:04:09.553341    1172 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0807 20:04:09.553341    1172 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0807 20:04:09.553341    1172 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0807 20:04:09.554605    1172 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-116700-m02 san=[127.0.0.1 172.28.235.119 localhost minikube multinode-116700-m02]
	I0807 20:04:09.729169    1172 provision.go:177] copyRemoteCerts
	I0807 20:04:09.742026    1172 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 20:04:09.742026    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:04:12.025197    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:04:12.025197    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:04:12.025197    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:04:14.699257    1172 main.go:141] libmachine: [stdout =====>] : 172.28.235.119
	
	I0807 20:04:14.699257    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:04:14.700474    1172 sshutil.go:53] new ssh client: &{IP:172.28.235.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700-m02\id_rsa Username:docker}
	I0807 20:04:14.802751    1172 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0606608s)
	I0807 20:04:14.802751    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0807 20:04:14.803252    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 20:04:14.850202    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0807 20:04:14.850294    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0807 20:04:14.898812    1172 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0807 20:04:14.899393    1172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0807 20:04:14.951577    1172 provision.go:87] duration metric: took 15.5747826s to configureAuth
	I0807 20:04:14.951577    1172 buildroot.go:189] setting minikube options for container-runtime
	I0807 20:04:14.952586    1172 config.go:182] Loaded profile config "multinode-116700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 20:04:14.952586    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:04:17.228798    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:04:17.228798    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:04:17.229047    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 20:04:20.030388    1172 main.go:141] libmachine: [stdout =====>] : 172.28.235.119
	
	I0807 20:04:20.030732    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:04:20.037714    1172 main.go:141] libmachine: Using SSH client type: native
	I0807 20:04:20.038713    1172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8faae0] 0x8fd6c0 <nil>  [] 0s} 172.28.235.119 22 <nil> <nil>}
	I0807 20:04:20.038713    1172 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0807 20:04:20.177841    1172 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0807 20:04:20.177841    1172 buildroot.go:70] root file system type: tmpfs
	I0807 20:04:20.177909    1172 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0807 20:04:20.178199    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 20:04:22.513557    1172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 20:04:22.513997    1172 main.go:141] libmachine: [stderr =====>] : 
	I0807 20:04:22.514095    1172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	
	
	==> Docker <==
	Aug 07 20:02:59 multinode-116700 dockerd[1102]: time="2024-08-07T20:02:59.509174678Z" level=warning msg="cleaning up after shim disconnected" id=412bbaf2063ed41bf0b63f3d0e15206582aa892dbe7f29e8bf194bd40a6b28de namespace=moby
	Aug 07 20:02:59 multinode-116700 dockerd[1102]: time="2024-08-07T20:02:59.509189679Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 07 20:02:59 multinode-116700 dockerd[1093]: time="2024-08-07T20:02:59.512008095Z" level=info msg="ignoring event" container=412bbaf2063ed41bf0b63f3d0e15206582aa892dbe7f29e8bf194bd40a6b28de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 07 20:02:59 multinode-116700 dockerd[1102]: time="2024-08-07T20:02:59.582326598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 20:02:59 multinode-116700 dockerd[1102]: time="2024-08-07T20:02:59.582405999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 20:02:59 multinode-116700 dockerd[1102]: time="2024-08-07T20:02:59.582418199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 20:02:59 multinode-116700 dockerd[1102]: time="2024-08-07T20:02:59.583747406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 20:02:59 multinode-116700 cri-dockerd[1363]: time="2024-08-07T20:02:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1e20caf2f4e9c504e603c18651d5c82aa4328ebf8262709a0788f9fd75aefd5f/resolv.conf as [nameserver 172.28.224.1]"
	Aug 07 20:02:59 multinode-116700 dockerd[1102]: time="2024-08-07T20:02:59.888267414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 20:02:59 multinode-116700 dockerd[1102]: time="2024-08-07T20:02:59.888386415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 20:02:59 multinode-116700 dockerd[1102]: time="2024-08-07T20:02:59.888445315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 20:02:59 multinode-116700 dockerd[1102]: time="2024-08-07T20:02:59.888610516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 20:03:00 multinode-116700 dockerd[1102]: time="2024-08-07T20:03:00.081870879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 20:03:00 multinode-116700 dockerd[1102]: time="2024-08-07T20:03:00.082096679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 20:03:00 multinode-116700 dockerd[1102]: time="2024-08-07T20:03:00.082121379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 20:03:00 multinode-116700 dockerd[1102]: time="2024-08-07T20:03:00.082241979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 20:03:00 multinode-116700 cri-dockerd[1363]: time="2024-08-07T20:03:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/26d26caff42621d1bdcc8b8c1c1c1efed7d997b488d8412d0bbae721c36b4159/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 07 20:03:00 multinode-116700 dockerd[1102]: time="2024-08-07T20:03:00.448112169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 20:03:00 multinode-116700 dockerd[1102]: time="2024-08-07T20:03:00.448198969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 20:03:00 multinode-116700 dockerd[1102]: time="2024-08-07T20:03:00.448218169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 20:03:00 multinode-116700 dockerd[1102]: time="2024-08-07T20:03:00.448324369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 20:03:12 multinode-116700 dockerd[1102]: time="2024-08-07T20:03:12.784767416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 07 20:03:12 multinode-116700 dockerd[1102]: time="2024-08-07T20:03:12.786180934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 07 20:03:12 multinode-116700 dockerd[1102]: time="2024-08-07T20:03:12.786320136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 07 20:03:12 multinode-116700 dockerd[1102]: time="2024-08-07T20:03:12.786670740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	865a1a2b9b61e       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   140707f160bce       storage-provisioner
	6b3a06cbdf659       8c811b4aec35f                                                                                         About a minute ago   Running             busybox                   1                   26d26caff4262       busybox-fc5497c4f-s4njd
	0f34864af942e       cbb01a7bd410d                                                                                         About a minute ago   Running             coredns                   1                   1e20caf2f4e9c       coredns-7db6d8ff4d-7l6v2
	49ca5ec73eb91       917d7814b9b5b                                                                                         2 minutes ago        Running             kindnet-cni               1                   59331fc34e036       kindnet-kltmx
	412bbaf2063ed       6e38f40d628db                                                                                         2 minutes ago        Exited              storage-provisioner       1                   140707f160bce       storage-provisioner
	d5b7ce02c83d2       55bb025d2cfa5                                                                                         2 minutes ago        Running             kube-proxy                1                   eb510ae02c22e       kube-proxy-fmjt9
	99169adeba5f0       3861cfcd7c04c                                                                                         2 minutes ago        Running             etcd                      0                   d39c003ce0367       etcd-multinode-116700
	13567b0ad4221       1f6d574d502f3                                                                                         2 minutes ago        Running             kube-apiserver            0                   3f97652136d1a       kube-apiserver-multinode-116700
	4ea9e8ea04a51       3edc18e7b7672                                                                                         2 minutes ago        Running             kube-scheduler            1                   78154bc05bd7c       kube-scheduler-multinode-116700
	3ef1ad85d0901       76932a3b37d7e                                                                                         2 minutes ago        Running             kube-controller-manager   1                   8910c86ed899e       kube-controller-manager-multinode-116700
	4cb0f5f04f1c3       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   22 minutes ago       Exited              busybox                   0                   466d29d2ebc74       busybox-fc5497c4f-s4njd
	32f103de03d30       cbb01a7bd410d                                                                                         26 minutes ago       Exited              coredns                   0                   201691a17a928       coredns-7db6d8ff4d-7l6v2
	ec2579bb9d23c       kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3              26 minutes ago       Exited              kindnet-cni               0                   0877557fcf515       kindnet-kltmx
	3b896a77f5466       55bb025d2cfa5                                                                                         26 minutes ago       Exited              kube-proxy                0                   9fd565bc62073       kube-proxy-fmjt9
	1415d4256b4a2       3edc18e7b7672                                                                                         27 minutes ago       Exited              kube-scheduler            0                   1e5d82deee2fc       kube-scheduler-multinode-116700
	c50e3a9ac99f7       76932a3b37d7e                                                                                         27 minutes ago       Exited              kube-controller-manager   0                   3047b2dc6a149       kube-controller-manager-multinode-116700
	
	
	==> coredns [0f34864af942] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 61f4d0960164fdf8d8157aaa96d041acf5b29f3c98ba802d705114162ff9f2cc889bbb973f9b8023f3112734912ee6f4eadc4faa21115183d5697de30dae3805
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35682 - 55662 "HINFO IN 1857510915862385215.5249397830961960416. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.079081976s
	
	
	==> coredns [32f103de03d3] <==
	[INFO] 10.244.0.3:50310 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000072801s
	[INFO] 10.244.0.3:40617 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000124501s
	[INFO] 10.244.0.3:49260 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123802s
	[INFO] 10.244.0.3:53569 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000158302s
	[INFO] 10.244.0.3:46373 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141702s
	[INFO] 10.244.0.3:45713 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000223603s
	[INFO] 10.244.0.3:33908 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127102s
	[INFO] 10.244.1.2:40170 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108401s
	[INFO] 10.244.1.2:52007 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000168402s
	[INFO] 10.244.1.2:41791 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184802s
	[INFO] 10.244.1.2:51153 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000444005s
	[INFO] 10.244.0.3:40520 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000232003s
	[INFO] 10.244.0.3:53668 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000213402s
	[INFO] 10.244.0.3:47531 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000282304s
	[INFO] 10.244.0.3:40942 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122801s
	[INFO] 10.244.1.2:50193 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186002s
	[INFO] 10.244.1.2:35238 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000111802s
	[INFO] 10.244.1.2:36248 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084101s
	[INFO] 10.244.1.2:44351 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000084301s
	[INFO] 10.244.0.3:34541 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090901s
	[INFO] 10.244.0.3:50610 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000096301s
	[INFO] 10.244.0.3:37269 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000299303s
	[INFO] 10.244.0.3:35820 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089001s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-116700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-116700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=multinode-116700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_07T19_37_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 19:37:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-116700
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 20:04:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 20:02:47 +0000   Wed, 07 Aug 2024 19:37:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 20:02:47 +0000   Wed, 07 Aug 2024 19:37:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 20:02:47 +0000   Wed, 07 Aug 2024 19:37:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 20:02:47 +0000   Wed, 07 Aug 2024 20:02:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.226.95
	  Hostname:    multinode-116700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 002849ee5e2c4c6f80c184c3757c32de
	  System UUID:                f157be28-68de-9a48-8750-bc5dcec03341
	  Boot ID:                    7cf406c3-b0c3-439f-8372-a626e3a8b1c1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-s4njd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7db6d8ff4d-7l6v2                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-multinode-116700                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m24s
	  kube-system                 kindnet-kltmx                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-multinode-116700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-multinode-116700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-fmjt9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-multinode-116700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 26m                    kube-proxy       
	  Normal  Starting                 2m21s                  kube-proxy       
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node multinode-116700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node multinode-116700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node multinode-116700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node multinode-116700 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node multinode-116700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node multinode-116700 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           26m                    node-controller  Node multinode-116700 event: Registered Node multinode-116700 in Controller
	  Normal  NodeReady                26m                    kubelet          Node multinode-116700 status is now: NodeReady
	  Normal  Starting                 2m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m31s (x8 over 2m31s)  kubelet          Node multinode-116700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m31s (x8 over 2m31s)  kubelet          Node multinode-116700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m31s (x7 over 2m31s)  kubelet          Node multinode-116700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m12s                  node-controller  Node multinode-116700 event: Registered Node multinode-116700 in Controller
	
	
	Name:               multinode-116700-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-116700-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=multinode-116700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_07T19_41_08_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 19:41:07 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-116700-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 19:59:10 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 07 Aug 2024 19:57:56 +0000   Wed, 07 Aug 2024 20:03:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 07 Aug 2024 19:57:56 +0000   Wed, 07 Aug 2024 20:03:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 07 Aug 2024 19:57:56 +0000   Wed, 07 Aug 2024 20:03:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 07 Aug 2024 19:57:56 +0000   Wed, 07 Aug 2024 20:03:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.28.226.55
	  Hostname:    multinode-116700-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c49ef1cc90f24b7ab5f81237ccd4f927
	  System UUID:                42521705-30fc-8045-86f4-7e91b71785af
	  Boot ID:                    73fa879f-0034-4d78-82fd-ae0e4a83f35e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jpc88    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kindnet-gk542              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-proxy-vcb7n           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  RegisteredNode           23m                node-controller  Node multinode-116700-m02 event: Registered Node multinode-116700-m02 in Controller
	  Normal  NodeHasSufficientMemory  23m (x2 over 23m)  kubelet          Node multinode-116700-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x2 over 23m)  kubelet          Node multinode-116700-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x2 over 23m)  kubelet          Node multinode-116700-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                23m                kubelet          Node multinode-116700-m02 status is now: NodeReady
	  Normal  RegisteredNode           2m12s              node-controller  Node multinode-116700-m02 event: Registered Node multinode-116700-m02 in Controller
	  Normal  NodeNotReady             92s                node-controller  Node multinode-116700-m02 status is now: NodeNotReady
	
	
	Name:               multinode-116700-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-116700-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=multinode-116700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_07T19_57_34_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 19:57:33 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-116700-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 19:58:44 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 07 Aug 2024 19:57:50 +0000   Wed, 07 Aug 2024 19:59:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 07 Aug 2024 19:57:50 +0000   Wed, 07 Aug 2024 19:59:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 07 Aug 2024 19:57:50 +0000   Wed, 07 Aug 2024 19:59:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 07 Aug 2024 19:57:50 +0000   Wed, 07 Aug 2024 19:59:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.28.226.146
	  Hostname:    multinode-116700-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 1da7b41e37e04f03a0fe4ba0e2784689
	  System UUID:                b737c2b9-0827-2647-9c66-717ab313ace1
	  Boot ID:                    56652ee7-e99f-4f83-ae13-cc64dde08257
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gsjlq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-proxy-4lnjd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 18m                    kube-proxy       
	  Normal  Starting                 7m14s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  18m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m (x2 over 18m)      kubelet          Node multinode-116700-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x2 over 18m)      kubelet          Node multinode-116700-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x2 over 18m)      kubelet          Node multinode-116700-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m                    kubelet          Node multinode-116700-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  7m18s (x2 over 7m18s)  kubelet          Node multinode-116700-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m18s (x2 over 7m18s)  kubelet          Node multinode-116700-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m18s (x2 over 7m18s)  kubelet          Node multinode-116700-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m14s                  node-controller  Node multinode-116700-m03 event: Registered Node multinode-116700-m03 in Controller
	  Normal  NodeReady                7m1s                   kubelet          Node multinode-116700-m03 status is now: NodeReady
	  Normal  NodeNotReady             5m24s                  node-controller  Node multinode-116700-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           2m12s                  node-controller  Node multinode-116700-m03 event: Registered Node multinode-116700-m03 in Controller
	
	
	==> dmesg <==
	[  +5.871130] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.341103] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.263218] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +6.945905] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug 7 20:01] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.188851] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Aug 7 20:02] systemd-fstab-generator[1020]: Ignoring "noauto" option for root device
	[  +0.120268] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.570882] systemd-fstab-generator[1059]: Ignoring "noauto" option for root device
	[  +0.202759] systemd-fstab-generator[1071]: Ignoring "noauto" option for root device
	[  +0.240308] systemd-fstab-generator[1085]: Ignoring "noauto" option for root device
	[  +3.012474] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.212238] systemd-fstab-generator[1329]: Ignoring "noauto" option for root device
	[  +0.201858] systemd-fstab-generator[1340]: Ignoring "noauto" option for root device
	[  +0.301678] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +0.908520] systemd-fstab-generator[1479]: Ignoring "noauto" option for root device
	[  +0.106775] kauditd_printk_skb: 202 callbacks suppressed
	[  +4.134057] systemd-fstab-generator[1622]: Ignoring "noauto" option for root device
	[  +1.370494] kauditd_printk_skb: 44 callbacks suppressed
	[  +6.799097] kauditd_printk_skb: 30 callbacks suppressed
	[  +3.489800] systemd-fstab-generator[2441]: Ignoring "noauto" option for root device
	[  +7.412668] kauditd_printk_skb: 70 callbacks suppressed
	[Aug 7 20:03] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [99169adeba5f] <==
	{"level":"info","ts":"2024-08-07T20:02:22.813717Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a5391e13896074eb","local-member-id":"56b8c59874c680","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T20:02:22.814101Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T20:02:22.821113Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"56b8c59874c680","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-08-07T20:02:22.825749Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-07T20:02:22.829721Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-07T20:02:22.82977Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-07T20:02:22.82941Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-07T20:02:22.830063Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"56b8c59874c680","initial-advertise-peer-urls":["https://172.28.226.95:2380"],"listen-peer-urls":["https://172.28.226.95:2380"],"advertise-client-urls":["https://172.28.226.95:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.226.95:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-07T20:02:22.83013Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-07T20:02:22.829444Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.28.226.95:2380"}
	{"level":"info","ts":"2024-08-07T20:02:22.8303Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.28.226.95:2380"}
	{"level":"info","ts":"2024-08-07T20:02:23.561632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56b8c59874c680 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-07T20:02:23.562017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56b8c59874c680 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-07T20:02:23.562285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56b8c59874c680 received MsgPreVoteResp from 56b8c59874c680 at term 2"}
	{"level":"info","ts":"2024-08-07T20:02:23.562524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56b8c59874c680 became candidate at term 3"}
	{"level":"info","ts":"2024-08-07T20:02:23.56273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56b8c59874c680 received MsgVoteResp from 56b8c59874c680 at term 3"}
	{"level":"info","ts":"2024-08-07T20:02:23.562959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56b8c59874c680 became leader at term 3"}
	{"level":"info","ts":"2024-08-07T20:02:23.563201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 56b8c59874c680 elected leader 56b8c59874c680 at term 3"}
	{"level":"info","ts":"2024-08-07T20:02:23.580918Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"56b8c59874c680","local-member-attributes":"{Name:multinode-116700 ClientURLs:[https://172.28.226.95:2379]}","request-path":"/0/members/56b8c59874c680/attributes","cluster-id":"a5391e13896074eb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-07T20:02:23.581134Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T20:02:23.581681Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-07T20:02:23.58173Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-07T20:02:23.581154Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T20:02:23.58686Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.28.226.95:2379"}
	{"level":"info","ts":"2024-08-07T20:02:23.594782Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:04:51 up 4 min,  0 users,  load average: 0.16, 0.15, 0.06
	Linux multinode-116700 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [49ca5ec73eb9] <==
	I0807 20:04:10.689641       1 main.go:322] Node multinode-116700-m03 has CIDR [10.244.3.0/24] 
	I0807 20:04:20.690881       1 main.go:295] Handling node with IPs: map[172.28.226.95:{}]
	I0807 20:04:20.690917       1 main.go:299] handling current node
	I0807 20:04:20.690934       1 main.go:295] Handling node with IPs: map[172.28.226.55:{}]
	I0807 20:04:20.690940       1 main.go:322] Node multinode-116700-m02 has CIDR [10.244.1.0/24] 
	I0807 20:04:20.691085       1 main.go:295] Handling node with IPs: map[172.28.226.146:{}]
	I0807 20:04:20.691124       1 main.go:322] Node multinode-116700-m03 has CIDR [10.244.3.0/24] 
	I0807 20:04:30.684693       1 main.go:295] Handling node with IPs: map[172.28.226.55:{}]
	I0807 20:04:30.684784       1 main.go:322] Node multinode-116700-m02 has CIDR [10.244.1.0/24] 
	I0807 20:04:30.684957       1 main.go:295] Handling node with IPs: map[172.28.226.146:{}]
	I0807 20:04:30.684989       1 main.go:322] Node multinode-116700-m03 has CIDR [10.244.3.0/24] 
	I0807 20:04:30.685070       1 main.go:295] Handling node with IPs: map[172.28.226.95:{}]
	I0807 20:04:30.685101       1 main.go:299] handling current node
	I0807 20:04:40.682520       1 main.go:295] Handling node with IPs: map[172.28.226.95:{}]
	I0807 20:04:40.682704       1 main.go:299] handling current node
	I0807 20:04:40.682725       1 main.go:295] Handling node with IPs: map[172.28.226.55:{}]
	I0807 20:04:40.682735       1 main.go:322] Node multinode-116700-m02 has CIDR [10.244.1.0/24] 
	I0807 20:04:40.682877       1 main.go:295] Handling node with IPs: map[172.28.226.146:{}]
	I0807 20:04:40.682911       1 main.go:322] Node multinode-116700-m03 has CIDR [10.244.3.0/24] 
	I0807 20:04:50.691485       1 main.go:295] Handling node with IPs: map[172.28.226.95:{}]
	I0807 20:04:50.691603       1 main.go:299] handling current node
	I0807 20:04:50.691657       1 main.go:295] Handling node with IPs: map[172.28.226.55:{}]
	I0807 20:04:50.691666       1 main.go:322] Node multinode-116700-m02 has CIDR [10.244.1.0/24] 
	I0807 20:04:50.691812       1 main.go:295] Handling node with IPs: map[172.28.226.146:{}]
	I0807 20:04:50.691823       1 main.go:322] Node multinode-116700-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ec2579bb9d23] <==
	I0807 19:59:03.232963       1 main.go:322] Node multinode-116700-m03 has CIDR [10.244.3.0/24] 
	I0807 19:59:13.240662       1 main.go:295] Handling node with IPs: map[172.28.224.86:{}]
	I0807 19:59:13.240772       1 main.go:299] handling current node
	I0807 19:59:13.240794       1 main.go:295] Handling node with IPs: map[172.28.226.55:{}]
	I0807 19:59:13.240804       1 main.go:322] Node multinode-116700-m02 has CIDR [10.244.1.0/24] 
	I0807 19:59:13.240983       1 main.go:295] Handling node with IPs: map[172.28.226.146:{}]
	I0807 19:59:13.241028       1 main.go:322] Node multinode-116700-m03 has CIDR [10.244.3.0/24] 
	I0807 19:59:23.231790       1 main.go:295] Handling node with IPs: map[172.28.224.86:{}]
	I0807 19:59:23.231936       1 main.go:299] handling current node
	I0807 19:59:23.231974       1 main.go:295] Handling node with IPs: map[172.28.226.55:{}]
	I0807 19:59:23.232442       1 main.go:322] Node multinode-116700-m02 has CIDR [10.244.1.0/24] 
	I0807 19:59:23.232761       1 main.go:295] Handling node with IPs: map[172.28.226.146:{}]
	I0807 19:59:23.232775       1 main.go:322] Node multinode-116700-m03 has CIDR [10.244.3.0/24] 
	I0807 19:59:33.231986       1 main.go:295] Handling node with IPs: map[172.28.224.86:{}]
	I0807 19:59:33.232178       1 main.go:299] handling current node
	I0807 19:59:33.232231       1 main.go:295] Handling node with IPs: map[172.28.226.55:{}]
	I0807 19:59:33.232308       1 main.go:322] Node multinode-116700-m02 has CIDR [10.244.1.0/24] 
	I0807 19:59:33.232597       1 main.go:295] Handling node with IPs: map[172.28.226.146:{}]
	I0807 19:59:33.232685       1 main.go:322] Node multinode-116700-m03 has CIDR [10.244.3.0/24] 
	I0807 19:59:43.233010       1 main.go:295] Handling node with IPs: map[172.28.226.146:{}]
	I0807 19:59:43.233186       1 main.go:322] Node multinode-116700-m03 has CIDR [10.244.3.0/24] 
	I0807 19:59:43.233688       1 main.go:295] Handling node with IPs: map[172.28.224.86:{}]
	I0807 19:59:43.233708       1 main.go:299] handling current node
	I0807 19:59:43.233724       1 main.go:295] Handling node with IPs: map[172.28.226.55:{}]
	I0807 19:59:43.233730       1 main.go:322] Node multinode-116700-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [13567b0ad422] <==
	I0807 20:02:26.601248       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0807 20:02:26.602425       1 shared_informer.go:320] Caches are synced for configmaps
	I0807 20:02:26.602604       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0807 20:02:26.602900       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0807 20:02:26.604268       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0807 20:02:26.609313       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0807 20:02:26.613150       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0807 20:02:26.614018       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0807 20:02:26.614422       1 aggregator.go:165] initial CRD sync complete...
	I0807 20:02:26.614509       1 autoregister_controller.go:141] Starting autoregister controller
	I0807 20:02:26.614518       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0807 20:02:26.614525       1 cache.go:39] Caches are synced for autoregister controller
	I0807 20:02:26.619100       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0807 20:02:26.620725       1 policy_source.go:224] refreshing policies
	I0807 20:02:26.655875       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0807 20:02:27.410732       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0807 20:02:28.027790       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.224.86 172.28.226.95]
	I0807 20:02:28.029832       1 controller.go:615] quota admission added evaluator for: endpoints
	I0807 20:02:28.038552       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0807 20:02:29.538690       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0807 20:02:29.751359       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0807 20:02:29.768549       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0807 20:02:29.953954       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0807 20:02:30.005965       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0807 20:02:48.032238       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.226.95]
	
	
	==> kube-controller-manager [3ef1ad85d090] <==
	I0807 20:02:39.182448       1 shared_informer.go:320] Caches are synced for job
	I0807 20:02:39.208434       1 shared_informer.go:320] Caches are synced for disruption
	I0807 20:02:39.209269       1 shared_informer.go:320] Caches are synced for attach detach
	I0807 20:02:39.230401       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-116700"
	I0807 20:02:39.230890       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-116700-m02"
	I0807 20:02:39.231109       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-116700-m03"
	I0807 20:02:39.231816       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0807 20:02:39.233856       1 shared_informer.go:320] Caches are synced for stateful set
	I0807 20:02:39.236065       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="70.129815ms"
	I0807 20:02:39.237085       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="61.801µs"
	I0807 20:02:39.248371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.757815ms"
	I0807 20:02:39.250928       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.101µs"
	I0807 20:02:39.296753       1 shared_informer.go:320] Caches are synced for resource quota
	I0807 20:02:39.355507       1 shared_informer.go:320] Caches are synced for resource quota
	I0807 20:02:39.731004       1 shared_informer.go:320] Caches are synced for garbage collector
	I0807 20:02:39.731158       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0807 20:02:39.808114       1 shared_informer.go:320] Caches are synced for garbage collector
	I0807 20:02:47.194764       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-116700-m02"
	I0807 20:03:00.393096       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="2.535799ms"
	I0807 20:03:01.674929       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.883352ms"
	I0807 20:03:01.676041       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.301µs"
	I0807 20:03:01.726215       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.168876ms"
	I0807 20:03:01.726341       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.801µs"
	I0807 20:03:19.335326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.8809ms"
	I0807 20:03:19.337320       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.301µs"
	
	
	==> kube-controller-manager [c50e3a9ac99f] <==
	I0807 19:38:17.421298       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0807 19:41:07.170093       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-116700-m02\" does not exist"
	I0807 19:41:07.185316       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-116700-m02" podCIDRs=["10.244.1.0/24"]
	I0807 19:41:07.454154       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-116700-m02"
	I0807 19:41:40.538335       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-116700-m02"
	I0807 19:42:08.245298       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.57851ms"
	I0807 19:42:08.263355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.757411ms"
	I0807 19:42:08.263438       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.4µs"
	I0807 19:42:08.280233       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31µs"
	I0807 19:42:10.760509       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.696319ms"
	I0807 19:42:10.760870       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="280.004µs"
	I0807 19:42:11.047780       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.574392ms"
	I0807 19:42:11.048227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.101µs"
	I0807 19:46:10.521620       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-116700-m03\" does not exist"
	I0807 19:46:10.521696       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-116700-m02"
	I0807 19:46:10.542212       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-116700-m03" podCIDRs=["10.244.2.0/24"]
	I0807 19:46:12.538550       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-116700-m03"
	I0807 19:46:39.397778       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-116700-m02"
	I0807 19:54:42.686460       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-116700-m02"
	I0807 19:57:27.551789       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-116700-m02"
	I0807 19:57:33.722865       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-116700-m03\" does not exist"
	I0807 19:57:33.723473       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-116700-m02"
	I0807 19:57:33.749316       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-116700-m03" podCIDRs=["10.244.3.0/24"]
	I0807 19:57:50.533000       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-116700-m02"
	I0807 19:59:27.818291       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-116700-m02"
	
	
	==> kube-proxy [3b896a77f546] <==
	I0807 19:37:55.892896       1 server_linux.go:69] "Using iptables proxy"
	I0807 19:37:55.906357       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.224.86"]
	I0807 19:37:55.960523       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0807 19:37:55.960664       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0807 19:37:55.960687       1 server_linux.go:165] "Using iptables Proxier"
	I0807 19:37:55.964705       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0807 19:37:55.965221       1 server.go:872] "Version info" version="v1.30.3"
	I0807 19:37:55.965238       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 19:37:55.966667       1 config.go:192] "Starting service config controller"
	I0807 19:37:55.966715       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0807 19:37:55.966748       1 config.go:101] "Starting endpoint slice config controller"
	I0807 19:37:55.966754       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0807 19:37:55.970324       1 config.go:319] "Starting node config controller"
	I0807 19:37:55.971420       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0807 19:37:56.067062       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0807 19:37:56.067134       1 shared_informer.go:320] Caches are synced for service config
	I0807 19:37:56.072467       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d5b7ce02c83d] <==
	I0807 20:02:29.816876       1 server_linux.go:69] "Using iptables proxy"
	I0807 20:02:29.925858       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.226.95"]
	I0807 20:02:30.149018       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0807 20:02:30.149296       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0807 20:02:30.149320       1 server_linux.go:165] "Using iptables Proxier"
	I0807 20:02:30.154515       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0807 20:02:30.155336       1 server.go:872] "Version info" version="v1.30.3"
	I0807 20:02:30.155435       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 20:02:30.159381       1 config.go:192] "Starting service config controller"
	I0807 20:02:30.159664       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0807 20:02:30.159901       1 config.go:101] "Starting endpoint slice config controller"
	I0807 20:02:30.160190       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0807 20:02:30.162989       1 config.go:319] "Starting node config controller"
	I0807 20:02:30.166651       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0807 20:02:30.260785       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0807 20:02:30.260897       1 shared_informer.go:320] Caches are synced for service config
	I0807 20:02:30.266824       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1415d4256b4a] <==
	W0807 19:37:37.217151       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0807 19:37:37.217434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0807 19:37:37.275895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0807 19:37:37.276164       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0807 19:37:37.355238       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0807 19:37:37.355363       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0807 19:37:37.371774       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0807 19:37:37.372551       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0807 19:37:37.382311       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0807 19:37:37.382673       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0807 19:37:37.471613       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0807 19:37:37.471897       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0807 19:37:37.535975       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0807 19:37:37.536122       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0807 19:37:37.562575       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0807 19:37:37.563626       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0807 19:37:37.617226       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0807 19:37:37.617453       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0807 19:37:37.669556       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0807 19:37:37.670249       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0807 19:37:40.152655       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0807 19:59:48.160975       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0807 19:59:48.161051       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0807 19:59:48.161384       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0807 19:59:48.161646       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [4ea9e8ea04a5] <==
	I0807 20:02:23.878470       1 serving.go:380] Generated self-signed cert in-memory
	W0807 20:02:26.518285       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0807 20:02:26.518430       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0807 20:02:26.518468       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0807 20:02:26.518713       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0807 20:02:26.568045       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0807 20:02:26.568314       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 20:02:26.574201       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0807 20:02:26.574474       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0807 20:02:26.577638       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0807 20:02:26.574932       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0807 20:02:26.679175       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 07 20:02:43 multinode-116700 kubelet[1629]: E0807 20:02:43.202350    1629 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 07 20:02:43 multinode-116700 kubelet[1629]: E0807 20:02:43.202440    1629 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7de73f9c-93d9-46c6-ae10-b253dd257a19-config-volume podName:7de73f9c-93d9-46c6-ae10-b253dd257a19 nodeName:}" failed. No retries permitted until 2024-08-07 20:02:59.202423555 +0000 UTC m=+38.932617231 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7de73f9c-93d9-46c6-ae10-b253dd257a19-config-volume") pod "coredns-7db6d8ff4d-7l6v2" (UID: "7de73f9c-93d9-46c6-ae10-b253dd257a19") : object "kube-system"/"coredns" not registered
	Aug 07 20:02:43 multinode-116700 kubelet[1629]: E0807 20:02:43.303110    1629 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Aug 07 20:02:43 multinode-116700 kubelet[1629]: E0807 20:02:43.303201    1629 projected.go:200] Error preparing data for projected volume kube-api-access-4td6d for pod default/busybox-fc5497c4f-s4njd: object "default"/"kube-root-ca.crt" not registered
	Aug 07 20:02:43 multinode-116700 kubelet[1629]: E0807 20:02:43.303335    1629 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e89136fe-dd58-4e76-b6e8-4a71c0f51bbb-kube-api-access-4td6d podName:e89136fe-dd58-4e76-b6e8-4a71c0f51bbb nodeName:}" failed. No retries permitted until 2024-08-07 20:02:59.303319738 +0000 UTC m=+39.033513514 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-4td6d" (UniqueName: "kubernetes.io/projected/e89136fe-dd58-4e76-b6e8-4a71c0f51bbb-kube-api-access-4td6d") pod "busybox-fc5497c4f-s4njd" (UID: "e89136fe-dd58-4e76-b6e8-4a71c0f51bbb") : object "default"/"kube-root-ca.crt" not registered
	Aug 07 20:02:43 multinode-116700 kubelet[1629]: E0807 20:02:43.573804    1629 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7l6v2" podUID="7de73f9c-93d9-46c6-ae10-b253dd257a19"
	Aug 07 20:02:43 multinode-116700 kubelet[1629]: E0807 20:02:43.574228    1629 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-s4njd" podUID="e89136fe-dd58-4e76-b6e8-4a71c0f51bbb"
	Aug 07 20:02:45 multinode-116700 kubelet[1629]: E0807 20:02:45.573870    1629 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-7l6v2" podUID="7de73f9c-93d9-46c6-ae10-b253dd257a19"
	Aug 07 20:02:45 multinode-116700 kubelet[1629]: E0807 20:02:45.574200    1629 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-s4njd" podUID="e89136fe-dd58-4e76-b6e8-4a71c0f51bbb"
	Aug 07 20:03:00 multinode-116700 kubelet[1629]: I0807 20:03:00.306443    1629 scope.go:117] "RemoveContainer" containerID="b6325ae79a1456d29ea35428b22a76a19289acd4464bf278d7ed7df55d47929e"
	Aug 07 20:03:00 multinode-116700 kubelet[1629]: I0807 20:03:00.307026    1629 scope.go:117] "RemoveContainer" containerID="412bbaf2063ed41bf0b63f3d0e15206582aa892dbe7f29e8bf194bd40a6b28de"
	Aug 07 20:03:00 multinode-116700 kubelet[1629]: E0807 20:03:00.307224    1629 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8a8036f6-f1a0-4fca-b8dd-ed99c3535b47)\"" pod="kube-system/storage-provisioner" podUID="8a8036f6-f1a0-4fca-b8dd-ed99c3535b47"
	Aug 07 20:03:12 multinode-116700 kubelet[1629]: I0807 20:03:12.574074    1629 scope.go:117] "RemoveContainer" containerID="412bbaf2063ed41bf0b63f3d0e15206582aa892dbe7f29e8bf194bd40a6b28de"
	Aug 07 20:03:20 multinode-116700 kubelet[1629]: I0807 20:03:20.589873    1629 scope.go:117] "RemoveContainer" containerID="1dbaa8c7ed6927949af31d13d72499c894abc7f3a0c986acce07db6ed12f0629"
	Aug 07 20:03:20 multinode-116700 kubelet[1629]: E0807 20:03:20.613903    1629 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 20:03:20 multinode-116700 kubelet[1629]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 20:03:20 multinode-116700 kubelet[1629]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 20:03:20 multinode-116700 kubelet[1629]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 20:03:20 multinode-116700 kubelet[1629]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 20:03:20 multinode-116700 kubelet[1629]: I0807 20:03:20.642480    1629 scope.go:117] "RemoveContainer" containerID="c90df84145cbd39a097f020b7982091452d079c70e19411d258a2443fb447205"
	Aug 07 20:04:20 multinode-116700 kubelet[1629]: E0807 20:04:20.611804    1629 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 20:04:20 multinode-116700 kubelet[1629]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 20:04:20 multinode-116700 kubelet[1629]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 20:04:20 multinode-116700 kubelet[1629]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 20:04:20 multinode-116700 kubelet[1629]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0807 20:04:42.764680    8044 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-116700 -n multinode-116700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-116700 -n multinode-116700: (12.6847266s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-116700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (396.36s)

TestNoKubernetes/serial/StartWithK8s (299.9s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-662200 --driver=hyperv
E0807 20:23:20.568422    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 20:23:38.187833    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-662200 --driver=hyperv: exit status 1 (4m59.602912s)

-- stdout --
	* [NoKubernetes-662200] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19389
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-662200" primary control-plane node in "NoKubernetes-662200" cluster

-- /stdout --
** stderr ** 
	W0807 20:21:56.327070    1604 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-662200 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-662200 -n NoKubernetes-662200
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-662200 -n NoKubernetes-662200: exit status 7 (294.0643ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	W0807 20:26:55.934462    5016 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-662200" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (299.90s)

TestPause/serial/Start (10800.478s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-033000 --memory=2048 --install-addons=false --wait=all --driver=hyperv
panic: test timed out after 3h0m0s
running tests:
	TestKubernetesUpgrade (7m44s)
	TestPause (52s)
	TestPause/serial (52s)
	TestPause/serial/Start (52s)
	TestRunningBinaryUpgrade (2m42s)
	TestStartStop (52s)
	TestStoppedBinaryUpgrade (2m51s)
	TestStoppedBinaryUpgrade/Upgrade (2m49s)

goroutine 2203 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 3 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00063a820, 0xc00083bbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000008600, {0x525f3e0, 0x2a, 0x2a}, {0x2eb2a2b?, 0xce80cf?, 0x5282840?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0006c32c0)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0006c32c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 13 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000169f00)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 162 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00129e3d0, 0x3b)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x2949080?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0013d11a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00129e400)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000208030, {0x3e93460, 0xc0006a63c0}, 0x1, 0xc000882240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000208030, 0x3b9aca00, 0x0, 0x1, 0xc000882240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 143
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 2114 [chan receive, 3 minutes]:
testing.(*T).Run(0xc00088c340, {0x2e5a82c?, 0x3005753e800?}, 0xc0013be540)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc00088c340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:160 +0x2bc
testing.tRunner(0xc00088c340, 0x393b520)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 143 [chan receive, 172 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00129e400, 0xc000882240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 141
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 678 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc00092a4b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0016384e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0016384e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestDockerFlags(0xc0016384e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:43 +0x105
testing.tRunner(0xc0016384e0, 0x393b400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 73 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 36
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

goroutine 680 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc00092a4b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001638820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001638820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc001638820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:146 +0x92
testing.tRunner(0xc001638820, 0x393b428)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 142 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0013d12c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 141
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 163 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3eb7150, 0xc000882240}, 0xc000871f50, 0xc000871f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3eb7150, 0xc000882240}, 0xa0?, 0xc000871f50, 0xc000871f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3eb7150?, 0xc000882240?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000871fd0?, 0xdbe4a4?, 0xc000918c60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 143
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 2142 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x7ffdafd84e10?, {0xc00085b6a8?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x350, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc001c12a20)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0001ff080)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0001ff080)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc000826b60, 0xc0001ff080)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2.1()
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:183 +0x385
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer.Operation.withEmptyData.func1()
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:18 +0x13
github.com/cenkalti/backoff/v4.doRetryNotify[...](0xc00085bc20?, {0x3e9fa58, 0xc0013e6520}, 0x393c6d0, {0x0, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:88 +0x132
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer(0x0?, {0x3e9fa58?, 0xc0013e6520?}, 0x40?, {0x0?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:61 +0x5c
github.com/cenkalti/backoff/v4.RetryNotify(...)
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:49
k8s.io/minikube/pkg/util/retry.Expo(0xc000873e28, 0x3b9aca00, 0x1a3185c5000, {0xc000873d08?, 0x2949080?, 0xc7f2a8?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/util/retry/retry.go:60 +0xef
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2(0xc000826b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:188 +0x2de
testing.tRunner(0xc000826b60, 0xc0013be540)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2114
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2027 [chan receive]:
testing.(*T).Run(0xc00063b520, {0x2e57d76?, 0xd18c2e2800?}, 0xc000619290)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause(0xc00063b520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:41 +0x159
testing.tRunner(0xc00063b520, 0x393b4e8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 164 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 163
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 2037 [syscall, locked to thread]:
syscall.SyscallN(0xc47ec5?, {0xc001835b20?, 0x22b6730?, 0xc001835b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc3fdf6?, 0x530fc80?, 0xc001835bf8?, 0xc3283b?, 0x1350e050eb8?, 0x35?, 0xc28ba6?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x604, {0xc001413a83?, 0x57d, 0xce41df?}, 0xc0013e2788?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001580508?, {0xc001413a83?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001580508, {0xc001413a83, 0x57d, 0x57d})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00067a188, {0xc001413a83?, 0x13553871ea8?, 0x210?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001440390, {0x3e92020, 0xc00067a2c8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3e92160, 0xc001440390}, {0x3e92020, 0xc00067a2c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3e92160, 0xc001440390})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc30c56?, {0x3e92160?, 0xc001440390?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3e92160, 0xc001440390}, {0x3e920e0, 0xc00067a188}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001440780?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2115
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 676 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc00092a4b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0016381a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0016381a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertOptions(0xc0016381a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:36 +0x92
testing.tRunner(0xc0016381a0, 0x393b3f0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2197 [chan receive]:
testing.(*testContext).waitParallel(0xc00092a4b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008271e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008271e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0008271e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0008271e0, 0xc001b3c2c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2143
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2165 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc00186e480, 0xc00148f380)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2081
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

goroutine 2038 [syscall, locked to thread]:
syscall.SyscallN(0xc47ec5?, {0xc0012dfb20?, 0x286a058?, 0xc0012dfb58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc61a19?, 0x530fc80?, 0xc00020fa40?, 0xc0012dfd00?, 0xc9b7db?, 0xc00067a1b0?, 0x4c75?, 0x3e92020?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x5b0, {0xc0012d38d0?, 0x730, 0xce41df?}, 0x3?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001580a08?, {0xc0012d38d0?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001580a08, {0xc0012d38d0, 0x730, 0x730})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00067a1b0, {0xc0012d38d0?, 0x7bf3?, 0x7bf3?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0014403c0, {0x3e92020, 0xc0000a7250})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3e92160, 0xc0014403c0}, {0x3e92020, 0xc0000a7250}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3e92160, 0xc0014403c0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc30c56?, {0x3e92160?, 0xc0014403c0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3e92160, 0xc0014403c0}, {0x3e920e0, 0xc00067a1b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc000054540?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2115
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 2198 [chan receive]:
testing.(*T).Run(0xc000827380, {0x2e56859?, 0x24?}, 0xc001b3c300)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause.func1(0xc000827380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:65 +0x1ee
testing.tRunner(0xc000827380, 0xc000619290)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2027
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1033 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00129e6d0, 0x31)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x2949080?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0016cccc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00129e700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00151a910, {0x3e93460, 0xc0012890b0}, 0x1, 0xc000882240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00151a910, 0x3b9aca00, 0x0, 0x1, 0xc000882240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 991
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 2126 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc0001ff080, 0xc000883200)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2142
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

goroutine 2125 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc001431b20?, 0xc3283b?, 0x10?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x0?, 0xc001431bb0?, 0xc001431bf8?, 0xc3283b?, 0xc001431bf0?, 0xc943d1?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x2a0, {0xc0008d6400?, 0x200, 0x0?}, 0xc001431c28?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0013e2008?, {0xc0008d6400?, 0x200?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0013e2008, {0xc0008d6400, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001474210, {0xc0008d6400?, 0x13553620e18?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001519680, {0x3e92020, 0xc0004c6840})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3e92160, 0xc001519680}, {0x3e92020, 0xc0004c6840}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001431e98?, {0x3e92160, 0xc001519680})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc2a19e?, {0x3e92160?, 0xc001519680?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3e92160, 0xc001519680}, {0x3e920e0, 0xc001474210}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2142
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 2194 [chan receive]:
testing.(*testContext).waitParallel(0xc00092a4b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000826d00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000826d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000826d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000826d00, 0xc001b3c1c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2143
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2124 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0xc47ec5?, {0xc0008cdb20?, 0x231af08?, 0xc0008cdb58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc3fdf6?, 0x530fc80?, 0xc0008cdbf8?, 0xc329a5?, 0x0?, 0x0?, 0xc000000000?, 0x3eb6e17?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x380, {0xc00166e309?, 0x4f7, 0xce41df?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000ba7188?, {0xc00166e309?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000ba7188, {0xc00166e309, 0x4f7, 0x4f7})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0014741d8, {0xc00166e309?, 0xc3283b?, 0x22f?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001519650, {0x3e92020, 0xc001474238})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3e92160, 0xc001519650}, {0x3e92020, 0xc001474238}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0001fedd0?, {0x3e92160, 0xc001519650})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000055920?, {0x3e92160?, 0xc001519650?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3e92160, 0xc001519650}, {0x3e920e0, 0xc0014741d8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0001fed80?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2142
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 990 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0016ccf00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 989
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2163 [syscall, locked to thread]:
syscall.SyscallN(0xc47ec5?, {0xc001579b20?, 0x231af08?, 0xc001579b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc3fdf6?, 0x530fc80?, 0xc001579bf8?, 0xc3283b?, 0x1350e050eb8?, 0x6d762d2d20303041?, 0x6b696e696d2e736e?, 0x7070415c36656275?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x43c, {0xc00166f27b?, 0x585, 0xce41df?}, 0x79726f6d656d2d2d?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001347408?, {0xc00166f27b?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001347408, {0xc00166f27b, 0x585, 0x585})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a7308, {0xc00166f27b?, 0x1355390c9a8?, 0x23b?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001926e10, {0x3e92020, 0xc0000a7398})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3e92160, 0xc001926e10}, {0x3e92020, 0xc0000a7398}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x6?, {0x3e92160, 0xc001926e10})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc001579eb8?, {0x3e92160?, 0xc001926e10?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3e92160, 0xc001926e10}, {0x3e920e0, 0xc0000a7308}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0x393b4f8?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2081
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 2164 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0xc47ec5?, {0xc001433b20?, 0x231af08?, 0xc001433b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc3fdf6?, 0x530fc80?, 0xc001433bf8?, 0xc329a5?, 0x1350e050598?, 0xc0017bbf35?, 0x10?, 0x10?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x624, {0xc0018e6000?, 0x200, 0x0?}, 0xc0017ba9a0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001347908?, {0xc0018e6000?, 0x200?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001347908, {0xc0018e6000, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a7368, {0xc0018e6000?, 0xc001812438?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001926e40, {0x3e92020, 0xc00050e4b8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3e92160, 0xc001926e40}, {0x3e92020, 0xc00050e4b8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x10?, {0x3e92160, 0xc001926e40})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc001433eb8?, {0x3e92160?, 0xc001926e40?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3e92160, 0xc001926e40}, {0x3e920e0, 0xc0000a7368}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0013be240?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2081
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 991 [chan receive, 136 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00129e700, 0xc000882240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 989
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2195 [chan receive]:
testing.(*testContext).waitParallel(0xc00092a4b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000826ea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000826ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000826ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000826ea0, 0xc001b3c200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2143
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 729 [IO wait, 159 minutes]:
internal/poll.runtime_pollWait(0x1355390caa0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc3fdf6?, 0x530fc80?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc0003ecf20, 0xc001c8dbb0)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc0003ecf08, 0x33c, {0xc00034c3c0?, 0x0?, 0x0?}, 0xc000620808?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc0003ecf08, 0xc001c8dd90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc0003ecf08)
	/usr/local/go/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc0003fc1c0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0003fc1c0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc00083e0f0, {0x3eaa240, 0xc0003fc1c0})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc00083e0f0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc001638ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 792
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 1034 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3eb7150, 0xc000882240}, 0xc001dddf50, 0xc001dddf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3eb7150, 0xc000882240}, 0xa0?, 0xc001dddf50, 0xc001dddf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3eb7150?, 0xc000882240?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001dddfd0?, 0xdbe4a4?, 0xc000883260?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 991
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 1347 [chan send, 125 minutes]:
os/exec.(*Cmd).watchCtx(0xc0013f2000, 0xc001414120)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 865
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

goroutine 677 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc00092a4b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001638340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001638340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertExpiration(0xc001638340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc001638340, 0x393b3e8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2145 [chan receive]:
testing.(*testContext).waitParallel(0xc00092a4b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008269c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008269c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0008269c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0008269c0, 0xc001b3c180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2143
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2079 [chan receive]:
testing.(*T).Run(0xc00063ba00, {0x2e56854?, 0xd773d3?}, 0x393b6f0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc00063ba00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc00063ba00, 0x393b518)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2202 [select]:
os/exec.(*Cmd).watchCtx(0xc0013f2180, 0xc0008832c0)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2199
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

goroutine 2025 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc00092a4b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00063a9c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00063a9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00063a9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:47 +0x39
testing.tRunner(0xc00063a9c0, 0x393b4d0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1035 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1034
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 2081 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x7ffdafd84e10?, {0xc0018236c0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x51c, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0013f8ea0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00186e480)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc00186e480)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc00063bd40, 0xc00186e480)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade.func1()
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:120 +0x385
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer.Operation.withEmptyData.func1()
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:18 +0x13
github.com/cenkalti/backoff/v4.doRetryNotify[...](0xc001823c38?, {0x3e9fa58, 0xc00082ca00}, 0x393c6d0, {0x0, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:88 +0x132
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer(0x0?, {0x3e9fa58?, 0xc00082ca00?}, 0x40?, {0x0?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:61 +0x5c
github.com/cenkalti/backoff/v4.RetryNotify(...)
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:49
k8s.io/minikube/pkg/util/retry.Expo(0xc001823e08, 0x3b9aca00, 0x1a3185c5000, {0xc001823d10?, 0x2949080?, 0x713a7?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/util/retry/retry.go:60 +0xef
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc00063bd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:125 +0x4f4
testing.tRunner(0xc00063bd40, 0x393b4f8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1031 [chan send, 136 minutes]:
os/exec.(*Cmd).watchCtx(0xc00083a600, 0xc001414540)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1030
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

goroutine 2196 [chan receive]:
testing.(*testContext).waitParallel(0xc00092a4b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000827040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000827040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000827040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000827040, 0xc001b3c240)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2143
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2200 [syscall, locked to thread]:
syscall.SyscallN(0xc47ec5?, {0xc0012e3b20?, 0x286a058?, 0xc0012e3b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc3fdf6?, 0x530fc80?, 0xc0012e3bf8?, 0xc3283b?, 0x1350e050598?, 0x35?, 0xc28ba6?, 0x530d7c0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6ac, {0xc000bba9e9?, 0x217, 0xce41df?}, 0x2eb23f7?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001346788?, {0xc000bba9e9?, 0x400?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001346788, {0xc000bba9e9, 0x217, 0x217})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a72c8, {0xc000bba9e9?, 0x13553690f08?, 0x68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000619aa0, {0x3e92020, 0xc00050e458})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3e92160, 0xc000619aa0}, {0x3e92020, 0xc00050e458}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3e92160, 0xc000619aa0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc30c56?, {0x3e92160?, 0xc000619aa0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3e92160, 0xc000619aa0}, {0x3e920e0, 0xc0000a72c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0000545a0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2199
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 2115 [syscall, 8 minutes, locked to thread]:
syscall.SyscallN(0x7ffdafd84e10?, {0xc00085f798?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x5d0, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0013f8780)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0013f2000)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0013f2000)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc00088c680, 0xc0013f2000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc00088c680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:222 +0x375
testing.tRunner(0xc00088c680, 0x393b498)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2143 [chan receive]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0008261a0, 0x393b6f0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2079
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2201 [syscall, locked to thread]:
syscall.SyscallN(0xc0014b7d50?, {0xc0014b7b20?, 0x6?, 0x1e?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x1?, 0x9?, 0xc0014b7bf8?, 0xc3283b?, 0x1?, 0x1?, 0xc28ba6?, 0x1?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x64c, {0xc000bba53a?, 0x2c6, 0x0?}, 0xc00084bd68?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001347688?, {0xc000bba53a?, 0x400?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001347688, {0xc000bba53a, 0x2c6, 0x2c6})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a7318, {0xc000bba53a?, 0xc0017fdc00?, 0x13a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000619c50, {0x3e92020, 0xc00067a478})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3e92160, 0xc000619c50}, {0x3e92020, 0xc00067a478}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0014b7e78?, {0x3e92160, 0xc000619c50})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0014b7f38?, {0x3e92160?, 0xc000619c50?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3e92160, 0xc000619c50}, {0x3e920e0, 0xc0000a7318}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0009188a0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2199
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 2039 [select, 8 minutes]:
os/exec.(*Cmd).watchCtx(0xc0013f2000, 0xc00148e120)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2115
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

goroutine 2199 [syscall, locked to thread]:
syscall.SyscallN(0x7ffdafd84e10?, {0xc001677a78?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x544, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0013f8de0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0013f2180)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0013f2180)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc000827520, 0xc0013f2180)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateFreshStart({0x3eb6f90, 0xc000348310}, 0xc000827520, {0xc001343eb0, 0xc})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:80 +0x275
k8s.io/minikube/test/integration.TestPause.func1.1(0xc000827520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:66 +0x43
testing.tRunner(0xc000827520, 0xc001b3c300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2198
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2144 [chan receive]:
testing.(*testContext).waitParallel(0xc00092a4b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008264e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008264e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0008264e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0008264e0, 0xc001b3c140)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2143
	/usr/local/go/src/testing/testing.go:1742 +0x390


Test pass (128/197)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 21.88
4 TestDownloadOnly/v1.20.0/preload-exists 0.08
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.5
9 TestDownloadOnly/v1.20.0/DeleteAll 1.34
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.21
12 TestDownloadOnly/v1.30.3/json-events 11.31
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.3
18 TestDownloadOnly/v1.30.3/DeleteAll 1.29
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 1.33
21 TestDownloadOnly/v1.31.0-rc.0/json-events 18.76
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.3
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 1.29
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 1.2
30 TestBinaryMirror 7.2
31 TestOffline 292.87
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.29
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.28
36 TestAddons/Setup 449.28
38 TestAddons/serial/Volcano 66.36
40 TestAddons/serial/GCPAuth/Namespaces 0.33
43 TestAddons/parallel/Ingress 73.3
44 TestAddons/parallel/InspektorGadget 27.85
45 TestAddons/parallel/MetricsServer 23.25
46 TestAddons/parallel/HelmTiller 31.35
48 TestAddons/parallel/CSI 93.53
49 TestAddons/parallel/Headlamp 41.71
50 TestAddons/parallel/CloudSpanner 22.44
51 TestAddons/parallel/LocalPath 35.98
52 TestAddons/parallel/NvidiaDevicePlugin 20.65
53 TestAddons/parallel/Yakd 26.93
54 TestAddons/StoppedEnableDisable 55.77
58 TestForceSystemdFlag 411.63
66 TestErrorSpam/start 17.75
67 TestErrorSpam/status 38.22
68 TestErrorSpam/pause 23.6
69 TestErrorSpam/unpause 23.73
70 TestErrorSpam/stop 58.83
73 TestFunctional/serial/CopySyncFile 0.04
74 TestFunctional/serial/StartWithProxy 250.03
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 128.68
77 TestFunctional/serial/KubeContext 0.13
78 TestFunctional/serial/KubectlGetPods 0.25
81 TestFunctional/serial/CacheCmd/cache/add_remote 27.05
82 TestFunctional/serial/CacheCmd/cache/add_local 11.79
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.27
84 TestFunctional/serial/CacheCmd/cache/list 0.26
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.7
86 TestFunctional/serial/CacheCmd/cache/cache_reload 37.49
87 TestFunctional/serial/CacheCmd/cache/delete 0.56
88 TestFunctional/serial/MinikubeKubectlCmd 0.52
92 TestFunctional/serial/LogsCmd 108.21
93 TestFunctional/serial/LogsFileCmd 180.72
105 TestFunctional/parallel/AddonsCmd 0.7
108 TestFunctional/parallel/SSHCmd 21.93
109 TestFunctional/parallel/CpCmd 55.65
111 TestFunctional/parallel/FileSync 11.28
112 TestFunctional/parallel/CertSync 61.4
118 TestFunctional/parallel/NonActiveRuntimeDisabled 10
120 TestFunctional/parallel/License 3.66
122 TestFunctional/parallel/UpdateContextCmd/no_changes 2.47
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.51
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.5
125 TestFunctional/parallel/ProfileCmd/profile_not_create 10.62
126 TestFunctional/parallel/ProfileCmd/profile_list 10.74
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
132 TestFunctional/parallel/ProfileCmd/profile_json_output 10.46
137 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.23
149 TestFunctional/parallel/ImageCommands/Setup 2.42
153 TestFunctional/parallel/Version/short 0.26
154 TestFunctional/parallel/Version/components 7.86
156 TestFunctional/parallel/ImageCommands/ImageRemove 120.45
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 59.97
159 TestFunctional/delete_echo-server_images 0.02
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestMultiControlPlane/serial/StartCluster 752.92
166 TestMultiControlPlane/serial/DeployApp 12.06
168 TestMultiControlPlane/serial/AddWorkerNode 275.9
169 TestMultiControlPlane/serial/NodeLabels 0.19
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 30.55
174 TestImageBuild/serial/Setup 207.2
175 TestImageBuild/serial/NormalBuild 10.97
176 TestImageBuild/serial/BuildWithBuildArg 9.37
177 TestImageBuild/serial/BuildWithDockerIgnore 8.72
178 TestImageBuild/serial/BuildWithSpecifiedDockerfile 8.77
182 TestJSONOutput/start/Command 217.35
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 8.11
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 8.05
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 35.15
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 1.51
210 TestMainNoArgs 0.26
211 TestMinikubeProfile 544.85
214 TestMountStart/serial/StartWithMountFirst 160.23
215 TestMountStart/serial/VerifyMountFirst 9.91
216 TestMountStart/serial/StartWithMountSecond 163.28
217 TestMountStart/serial/VerifyMountSecond 9.93
218 TestMountStart/serial/DeleteFirst 28.18
219 TestMountStart/serial/VerifyMountPostDelete 9.78
220 TestMountStart/serial/Stop 28.01
224 TestMultiNode/serial/FreshStart2Nodes 458.88
225 TestMultiNode/serial/DeployApp2Nodes 9.28
227 TestMultiNode/serial/AddNode 243.33
228 TestMultiNode/serial/MultiNodeLabels 0.19
229 TestMultiNode/serial/ProfileList 12.4
230 TestMultiNode/serial/CopyFile 379.36
231 TestMultiNode/serial/StopNode 78.99
232 TestMultiNode/serial/StartAfterStop 200
237 TestPreload 546.81
238 TestScheduledStopWindows 340.73
248 TestNoKubernetes/serial/StartNoK8sWithVersion 0.44
TestDownloadOnly/v1.20.0/json-events (21.88s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-154100 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-154100 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (21.8793311s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (21.88s)

TestDownloadOnly/v1.20.0/preload-exists (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.08s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.5s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-154100
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-154100: exit status 85 (498.4333ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-154100 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:29 UTC |          |
	|         | -p download-only-154100        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 17:29:39
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 17:29:39.316214    8104 out.go:291] Setting OutFile to fd 624 ...
	I0807 17:29:39.317141    8104 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:29:39.317141    8104 out.go:304] Setting ErrFile to fd 628...
	I0807 17:29:39.317141    8104 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0807 17:29:39.330790    8104 root.go:314] Error reading config file at C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0807 17:29:39.343142    8104 out.go:298] Setting JSON to true
	I0807 17:29:39.346295    8104 start.go:129] hostinfo: {"hostname":"minikube6","uptime":313708,"bootTime":1722738070,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4717 Build 19045.4717","kernelVersion":"10.0.19045.4717 Build 19045.4717","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0807 17:29:39.346445    8104 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 17:29:39.354341    8104 out.go:97] [download-only-154100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	I0807 17:29:39.354512    8104 notify.go:220] Checking for updates...
	W0807 17:29:39.354512    8104 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0807 17:29:39.357020    8104 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 17:29:39.360777    8104 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0807 17:29:39.364922    8104 out.go:169] MINIKUBE_LOCATION=19389
	I0807 17:29:39.368278    8104 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0807 17:29:39.375066    8104 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0807 17:29:39.375916    8104 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 17:29:44.883028    8104 out.go:97] Using the hyperv driver based on user configuration
	I0807 17:29:44.883028    8104 start.go:297] selected driver: hyperv
	I0807 17:29:44.883359    8104 start.go:901] validating driver "hyperv" against <nil>
	I0807 17:29:44.883683    8104 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 17:29:44.935806    8104 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0807 17:29:44.937237    8104 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0807 17:29:44.937760    8104 cni.go:84] Creating CNI manager for ""
	I0807 17:29:44.938000    8104 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0807 17:29:44.938097    8104 start.go:340] cluster config:
	{Name:download-only-154100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-154100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 17:29:44.938822    8104 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 17:29:44.944754    8104 out.go:97] Downloading VM boot image ...
	I0807 17:29:44.944754    8104 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.1-1722248113-19339-amd64.iso
	I0807 17:29:49.758392    8104 out.go:97] Starting "download-only-154100" primary control-plane node in "download-only-154100" cluster
	I0807 17:29:49.758392    8104 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0807 17:29:49.807111    8104 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0807 17:29:49.807111    8104 cache.go:56] Caching tarball of preloaded images
	I0807 17:29:49.807111    8104 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0807 17:29:49.811008    8104 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0807 17:29:49.811008    8104 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0807 17:29:49.887783    8104 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0807 17:29:53.411030    8104 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0807 17:29:53.412089    8104 preload.go:254] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0807 17:29:54.455134    8104 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0807 17:29:54.455449    8104 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-154100\config.json ...
	I0807 17:29:54.456161    8104 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-154100\config.json: {Name:mk6cb33fcc6dca5f40f1cea4c022a388ece9f50e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:29:54.456349    8104 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0807 17:29:54.458633    8104 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-154100 host does not exist
	  To start a cluster, run: "minikube start -p download-only-154100"

-- /stdout --
** stderr ** 
	W0807 17:30:01.197708    5316 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.50s)

TestDownloadOnly/v1.20.0/DeleteAll (1.34s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3349986s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.34s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-154100
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-154100: (1.2073067s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.21s)

TestDownloadOnly/v1.30.3/json-events (11.31s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-734300 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-734300 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperv: (11.3077936s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (11.31s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-734300
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-734300: exit status 85 (301.823ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-154100 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:29 UTC |                     |
	|         | -p download-only-154100        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:30 UTC | 07 Aug 24 17:30 UTC |
	| delete  | -p download-only-154100        | download-only-154100 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:30 UTC | 07 Aug 24 17:30 UTC |
	| start   | -o=json --download-only        | download-only-734300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:30 UTC |                     |
	|         | -p download-only-734300        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 17:30:04
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 17:30:04.310603    7188 out.go:291] Setting OutFile to fd 660 ...
	I0807 17:30:04.310603    7188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:30:04.310603    7188 out.go:304] Setting ErrFile to fd 684...
	I0807 17:30:04.310603    7188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:30:04.338660    7188 out.go:298] Setting JSON to true
	I0807 17:30:04.343396    7188 start.go:129] hostinfo: {"hostname":"minikube6","uptime":313733,"bootTime":1722738070,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4717 Build 19045.4717","kernelVersion":"10.0.19045.4717 Build 19045.4717","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0807 17:30:04.343396    7188 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 17:30:04.349169    7188 out.go:97] [download-only-734300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	I0807 17:30:04.349169    7188 notify.go:220] Checking for updates...
	I0807 17:30:04.353341    7188 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 17:30:04.356040    7188 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0807 17:30:04.359108    7188 out.go:169] MINIKUBE_LOCATION=19389
	I0807 17:30:04.362151    7188 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0807 17:30:04.368676    7188 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0807 17:30:04.369616    7188 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 17:30:10.079314    7188 out.go:97] Using the hyperv driver based on user configuration
	I0807 17:30:10.079389    7188 start.go:297] selected driver: hyperv
	I0807 17:30:10.079389    7188 start.go:901] validating driver "hyperv" against <nil>
	I0807 17:30:10.079389    7188 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 17:30:10.127094    7188 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0807 17:30:10.129915    7188 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0807 17:30:10.130060    7188 cni.go:84] Creating CNI manager for ""
	I0807 17:30:10.130060    7188 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 17:30:10.130060    7188 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 17:30:10.130124    7188 start.go:340] cluster config:
	{Name:download-only-734300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-734300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 17:30:10.130124    7188 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 17:30:10.133857    7188 out.go:97] Starting "download-only-734300" primary control-plane node in "download-only-734300" cluster
	I0807 17:30:10.133857    7188 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 17:30:10.179555    7188 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0807 17:30:10.180014    7188 cache.go:56] Caching tarball of preloaded images
	I0807 17:30:10.180086    7188 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 17:30:10.183856    7188 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0807 17:30:10.183856    7188 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0807 17:30:10.251664    7188 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4?checksum=md5:6304692df2fe6f7b0bdd7f93d160be8c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0807 17:30:13.128856    7188 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0807 17:30:13.130224    7188 preload.go:254] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0807 17:30:14.036961    7188 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0807 17:30:14.037863    7188 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-734300\config.json ...
	I0807 17:30:14.037863    7188 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-734300\config.json: {Name:mk4dce33095ad28bc8c50efacb731f556b39b3ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:30:14.038643    7188 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0807 17:30:14.039829    7188 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.30.3/kubectl.exe
	
	
	* The control-plane node download-only-734300 host does not exist
	  To start a cluster, run: "minikube start -p download-only-734300"

-- /stdout --
** stderr ** 
	W0807 17:30:15.555763    3952 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.30s)

TestDownloadOnly/v1.30.3/DeleteAll (1.29s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2868557s)
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (1.29s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (1.33s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-734300
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-734300: (1.3324917s)
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (1.33s)

TestDownloadOnly/v1.31.0-rc.0/json-events (18.76s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-481900 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-481900 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=hyperv: (18.7619658s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (18.76s)

TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-481900
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-481900: exit status 85 (302.5144ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-154100 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:29 UTC |                     |
	|         | -p download-only-154100           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	| delete  | --all                             | minikube             | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:30 UTC | 07 Aug 24 17:30 UTC |
	| delete  | -p download-only-154100           | download-only-154100 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:30 UTC | 07 Aug 24 17:30 UTC |
	| start   | -o=json --download-only           | download-only-734300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:30 UTC |                     |
	|         | -p download-only-734300           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	| delete  | --all                             | minikube             | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:30 UTC | 07 Aug 24 17:30 UTC |
	| delete  | -p download-only-734300           | download-only-734300 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:30 UTC | 07 Aug 24 17:30 UTC |
	| start   | -o=json --download-only           | download-only-481900 | minikube6\jenkins | v1.33.1 | 07 Aug 24 17:30 UTC |                     |
	|         | -p download-only-481900           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 17:30:18
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 17:30:18.548518    1696 out.go:291] Setting OutFile to fd 732 ...
	I0807 17:30:18.549323    1696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:30:18.549323    1696 out.go:304] Setting ErrFile to fd 736...
	I0807 17:30:18.549323    1696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:30:18.573038    1696 out.go:298] Setting JSON to true
	I0807 17:30:18.575215    1696 start.go:129] hostinfo: {"hostname":"minikube6","uptime":313748,"bootTime":1722738070,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4717 Build 19045.4717","kernelVersion":"10.0.19045.4717 Build 19045.4717","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0807 17:30:18.576446    1696 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 17:30:18.582609    1696 out.go:97] [download-only-481900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	I0807 17:30:18.583311    1696 notify.go:220] Checking for updates...
	I0807 17:30:18.585183    1696 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 17:30:18.588913    1696 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0807 17:30:18.591512    1696 out.go:169] MINIKUBE_LOCATION=19389
	I0807 17:30:18.594159    1696 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0807 17:30:18.600519    1696 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0807 17:30:18.601312    1696 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 17:30:24.072620    1696 out.go:97] Using the hyperv driver based on user configuration
	I0807 17:30:24.072620    1696 start.go:297] selected driver: hyperv
	I0807 17:30:24.072620    1696 start.go:901] validating driver "hyperv" against <nil>
	I0807 17:30:24.072620    1696 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 17:30:24.126805    1696 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0807 17:30:24.128256    1696 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0807 17:30:24.128256    1696 cni.go:84] Creating CNI manager for ""
	I0807 17:30:24.128256    1696 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0807 17:30:24.128256    1696 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 17:30:24.128256    1696 start.go:340] cluster config:
	{Name:download-only-481900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-481900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 17:30:24.128779    1696 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 17:30:24.131871    1696 out.go:97] Starting "download-only-481900" primary control-plane node in "download-only-481900" cluster
	I0807 17:30:24.132418    1696 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0807 17:30:24.187049    1696 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0807 17:30:24.187416    1696 cache.go:56] Caching tarball of preloaded images
	I0807 17:30:24.187877    1696 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0807 17:30:24.211364    1696 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0807 17:30:24.211364    1696 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0807 17:30:24.281545    1696 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:214beb6d5aadd59deaf940ce47a22f04 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0807 17:30:28.708126    1696 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0807 17:30:28.708850    1696 preload.go:254] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0807 17:30:29.597702    1696 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0807 17:30:29.598646    1696 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-481900\config.json ...
	I0807 17:30:29.598900    1696 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-481900\config.json: {Name:mkf216542285923d42e9b11da34630be3dd3c710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:30:29.599806    1696 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0807 17:30:29.601101    1696 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-rc.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.31.0-rc.0/kubectl.exe
	
	
	* The control-plane node download-only-481900 host does not exist
	  To start a cluster, run: "minikube start -p download-only-481900"

-- /stdout --
** stderr ** 
	W0807 17:30:37.235114   11288 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.30s)
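The "Last Start" log above is made of klog-format lines whose layout is stated in its own header: `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`. A minimal parsing sketch for such lines (a hypothetical helper for working with this report, not part of minikube or its test suite):

```python
import re

# Field layout taken from the log header:
# [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG_RE = re.compile(
    r"^(?P<level>[IWEF])(?P<month>\d{2})(?P<day>\d{2}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+"
    r"(?P<threadid>\d+) (?P<file>[^:]+):(?P<line>\d+)\] (?P<msg>.*)$"
)

def parse_klog(line: str):
    """Return a dict of klog fields, or None if the line is not klog-formatted."""
    m = KLOG_RE.match(line.strip())
    return m.groupdict() if m else None

# Example line copied from the start log above.
parsed = parse_klog("I0807 17:30:18.548518    1696 out.go:291] Setting OutFile to fd 732 ...")
```

Non-matching lines (such as the `*`-prefixed user-facing output) simply return `None`, so the helper can be run over the whole transcript to pull out only the structured entries.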

TestDownloadOnly/v1.31.0-rc.0/DeleteAll (1.29s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2854305s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (1.29s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (1.2s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-481900
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-481900: (1.2046627s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (1.20s)

TestBinaryMirror (7.2s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-249700 --alsologtostderr --binary-mirror http://127.0.0.1:65495 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-249700 --alsologtostderr --binary-mirror http://127.0.0.1:65495 --driver=hyperv: (6.3236615s)
helpers_test.go:175: Cleaning up "binary-mirror-249700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-249700
--- PASS: TestBinaryMirror (7.20s)

TestOffline (292.87s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-662200 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-662200 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (4m11.4421244s)
helpers_test.go:175: Cleaning up "offline-docker-662200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-662200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-662200: (41.4231562s)
--- PASS: TestOffline (292.87s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.29s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-463600
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-463600: exit status 85 (290.553ms)

-- stdout --
	* Profile "addons-463600" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-463600"

-- /stdout --
** stderr ** 
	W0807 17:30:50.973545    9848 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.29s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.28s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-463600
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-463600: exit status 85 (278.8774ms)

-- stdout --
	* Profile "addons-463600" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-463600"

-- /stdout --
** stderr ** 
	W0807 17:30:50.973545    2372 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.28s)

TestAddons/Setup (449.28s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-463600 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-463600 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (7m29.276557s)
--- PASS: TestAddons/Setup (449.28s)

TestAddons/serial/Volcano (66.36s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 21.9388ms
addons_test.go:913: volcano-controller stabilized in 22.0957ms
addons_test.go:905: volcano-admission stabilized in 22.0957ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-pm45f" [56b03d66-20dc-4448-b8c1-f78a4d083d6d] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.0074868s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-x7gwk" [f2139a10-413e-43b7-897a-ece91643bb92] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.0177417s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-h964k" [47901933-62d9-47e3-bd79-c46b89854cc0] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.0124078s
addons_test.go:932: (dbg) Run:  kubectl --context addons-463600 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-463600 create -f testdata\vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-463600 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [d66a829b-1b1a-4ff2-9de2-e13309aa2ff8] Pending
helpers_test.go:344: "test-job-nginx-0" [d66a829b-1b1a-4ff2-9de2-e13309aa2ff8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [d66a829b-1b1a-4ff2-9de2-e13309aa2ff8] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 25.0055479s
addons_test.go:968: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-463600 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-windows-amd64.exe -p addons-463600 addons disable volcano --alsologtostderr -v=1: (25.390551s)
--- PASS: TestAddons/serial/Volcano (66.36s)

TestAddons/serial/GCPAuth/Namespaces (0.33s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-463600 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-463600 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.33s)

TestAddons/parallel/Ingress (73.3s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-463600 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-463600 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-463600 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [49d574e6-7d3a-4bba-bfbd-69d35e9281c4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [49d574e6-7d3a-4bba-bfbd-69d35e9281c4] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.0210798s
addons_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-463600 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe -p addons-463600 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (11.2145047s)
addons_test.go:271: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-463600 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0807 17:40:54.998421    5904 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:288: (dbg) Run:  kubectl --context addons-463600 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-463600 ip
addons_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe -p addons-463600 ip: (2.849275s)
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 172.28.235.128
addons_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-463600 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-windows-amd64.exe -p addons-463600 addons disable ingress-dns --alsologtostderr -v=1: (17.7579614s)
addons_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-463600 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe -p addons-463600 addons disable ingress --alsologtostderr -v=1: (25.2936869s)
--- PASS: TestAddons/parallel/Ingress (73.30s)

TestAddons/parallel/InspektorGadget (27.85s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-bmrgx" [5665dc72-898b-40d9-a897-8ed886b5a420] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0180478s
addons_test.go:851: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-463600
addons_test.go:851: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-463600: (22.8263972s)
--- PASS: TestAddons/parallel/InspektorGadget (27.85s)

TestAddons/parallel/MetricsServer (23.25s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.936ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-gp89x" [4d36f4d2-262c-4b94-8f77-27b75d7ee197] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0154103s
addons_test.go:417: (dbg) Run:  kubectl --context addons-463600 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-463600 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:434: (dbg) Done: out/minikube-windows-amd64.exe -p addons-463600 addons disable metrics-server --alsologtostderr -v=1: (16.9705348s)
--- PASS: TestAddons/parallel/MetricsServer (23.25s)

TestAddons/parallel/HelmTiller (31.35s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 4.936ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-26bqt" [93227ebf-f8e0-434f-8d0d-933675f73c4a] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.0207148s
addons_test.go:475: (dbg) Run:  kubectl --context addons-463600 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-463600 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.4069265s)
addons_test.go:492: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-463600 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:492: (dbg) Done: out/minikube-windows-amd64.exe -p addons-463600 addons disable helm-tiller --alsologtostderr -v=1: (16.9031687s)
--- PASS: TestAddons/parallel/HelmTiller (31.35s)

TestAddons/parallel/CSI (93.53s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.9195ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-463600 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-463600 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [04129b7c-6160-489f-89a3-5225884fb4de] Pending
helpers_test.go:344: "task-pv-pod" [04129b7c-6160-489f-89a3-5225884fb4de] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [04129b7c-6160-489f-89a3-5225884fb4de] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.0186394s
addons_test.go:590: (dbg) Run:  kubectl --context addons-463600 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-463600 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-463600 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-463600 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-463600 delete pod task-pv-pod: (1.8367506s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-463600 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-463600 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-463600 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d904a45e-e727-4268-a462-0de6484f0722] Pending
helpers_test.go:344: "task-pv-pod-restore" [d904a45e-e727-4268-a462-0de6484f0722] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d904a45e-e727-4268-a462-0de6484f0722] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0113727s
addons_test.go:632: (dbg) Run:  kubectl --context addons-463600 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-463600 delete pod task-pv-pod-restore: (1.4894321s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-463600 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-463600 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-463600 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-windows-amd64.exe -p addons-463600 addons disable csi-hostpath-driver --alsologtostderr -v=1: (23.9273688s)
addons_test.go:648: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-463600 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-windows-amd64.exe -p addons-463600 addons disable volumesnapshots --alsologtostderr -v=1: (18.433614s)
--- PASS: TestAddons/parallel/CSI (93.53s)
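The repeated `kubectl get pvc ... -o jsonpath={.status.phase}` lines above are the harness re-polling the claim until it reports `Bound`. A minimal standalone sketch of that loop (the `wait_for_pvc_bound` name, the 2-second interval, and the default deadline are illustrative, not taken from the harness source):

```shell
#!/bin/sh
# Poll a PVC's .status.phase until it reads "Bound" or the deadline
# (seconds, default 360 to match the 6m0s wait in the log) expires.
wait_for_pvc_bound() {
  pvc=$1
  deadline=$(( $(date +%s) + ${2:-360} ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    phase=$(kubectl get pvc "$pvc" -o 'jsonpath={.status.phase}' -n default 2>/dev/null)
    if [ "$phase" = "Bound" ]; then
      return 0
    fi
    sleep 2
  done
  echo "timed out waiting for pvc $pvc" >&2
  return 1
}
```

The test itself does the equivalent from Go; here `kubectl` can be any command on PATH that prints the phase.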

                                                
                                    
TestAddons/parallel/Headlamp (41.71s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-463600 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-463600 --alsologtostderr -v=1: (17.9401178s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-9d868696f-nr5pf" [35dfd3c2-75ac-4b4e-ab62-fd995536cae3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-9d868696f-nr5pf" [35dfd3c2-75ac-4b4e-ab62-fd995536cae3] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.0096122s
addons_test.go:839: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-463600 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-windows-amd64.exe -p addons-463600 addons disable headlamp --alsologtostderr -v=1: (7.7518403s)
--- PASS: TestAddons/parallel/Headlamp (41.71s)

                                                
                                    
TestAddons/parallel/CloudSpanner (22.44s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-jb57p" [9f7fba2a-0949-4150-85f8-334acf987a7f] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0445509s
addons_test.go:870: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-463600
addons_test.go:870: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-463600: (17.3852676s)
--- PASS: TestAddons/parallel/CloudSpanner (22.44s)

                                                
                                    
TestAddons/parallel/LocalPath (35.98s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-463600 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-463600 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463600 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [4b6886d2-a524-4b73-8b29-691c966daf4f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [4b6886d2-a524-4b73-8b29-691c966daf4f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [4b6886d2-a524-4b73-8b29-691c966daf4f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.0118904s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-463600 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-463600 ssh "cat /opt/local-path-provisioner/pvc-5271f59d-3a51-427c-baa4-cc93a8edce35_default_test-pvc/file1"
addons_test.go:1009: (dbg) Done: out/minikube-windows-amd64.exe -p addons-463600 ssh "cat /opt/local-path-provisioner/pvc-5271f59d-3a51-427c-baa4-cc93a8edce35_default_test-pvc/file1": (10.7864268s)
addons_test.go:1021: (dbg) Run:  kubectl --context addons-463600 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-463600 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-463600 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-windows-amd64.exe -p addons-463600 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (8.5127376s)
--- PASS: TestAddons/parallel/LocalPath (35.98s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (20.65s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-48k52" [40123ef3-34c0-4437-a8f7-020468494af8] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0200867s
addons_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-463600
addons_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-463600: (15.6265019s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (20.65s)

                                                
                                    
TestAddons/parallel/Yakd (26.93s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-dfr7k" [db0e51a5-9c6a-41b6-af72-43ff5f07cec4] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0092538s
addons_test.go:1076: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-463600 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-windows-amd64.exe -p addons-463600 addons disable yakd --alsologtostderr -v=1: (21.9113616s)
--- PASS: TestAddons/parallel/Yakd (26.93s)

                                                
                                    
TestAddons/StoppedEnableDisable (55.77s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-463600
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-463600: (42.4453828s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-463600
addons_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-463600: (5.2547988s)
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-463600
addons_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-463600: (5.1201203s)
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-463600
addons_test.go:187: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-463600: (2.9468487s)
--- PASS: TestAddons/StoppedEnableDisable (55.77s)
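The sequence above verifies that addons can still be toggled against a stopped profile. Sketched as a standalone shell function (the profile name is taken from the log; `minikube` stands in for `out/minikube-windows-amd64.exe`, and the function name is made up for illustration):

```shell
#!/bin/sh
# Stop a profile, then toggle addons against it; every step must exit 0.
toggle_addons_while_stopped() {
  profile=$1
  minikube stop -p "$profile" &&
  minikube addons enable dashboard -p "$profile" &&
  minikube addons disable dashboard -p "$profile" &&
  minikube addons disable gvisor -p "$profile"
}
```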

                                                
                                    
TestForceSystemdFlag (411.63s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-782300 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-782300 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (5m59.2189269s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-782300 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-782300 ssh "docker info --format {{.CgroupDriver}}": (10.8050457s)
helpers_test.go:175: Cleaning up "force-systemd-flag-782300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-782300
E0807 20:28:20.573800    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 20:28:38.196396    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-782300: (41.600498s)
--- PASS: TestForceSystemdFlag (411.63s)

                                                
                                    
TestErrorSpam/start (17.75s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 start --dry-run: (5.9436549s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 start --dry-run: (5.8741645s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 start --dry-run: (5.9272347s)
--- PASS: TestErrorSpam/start (17.75s)

                                                
                                    
TestErrorSpam/status (38.22s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 status: (13.1501141s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 status: (12.5823469s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 status: (12.4820131s)
--- PASS: TestErrorSpam/status (38.22s)

                                                
                                    
TestErrorSpam/pause (23.6s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 pause: (8.0722466s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 pause
E0807 17:48:20.447477    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 17:48:20.462134    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 17:48:20.478114    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 17:48:20.510424    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 17:48:20.557735    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 17:48:20.651738    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 17:48:20.826369    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 17:48:21.158647    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 17:48:21.810791    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 pause: (7.7846768s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 pause
E0807 17:48:23.093935    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 17:48:25.657020    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 pause: (7.7378305s)
--- PASS: TestErrorSpam/pause (23.60s)

                                                
                                    
TestErrorSpam/unpause (23.73s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 unpause
E0807 17:48:30.791984    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 unpause: (8.0239511s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 unpause
E0807 17:48:41.033346    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 unpause: (7.8843287s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 unpause: (7.8220668s)
--- PASS: TestErrorSpam/unpause (23.73s)

                                                
                                    
TestErrorSpam/stop (58.83s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 stop
E0807 17:49:01.515592    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 stop: (35.5430983s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 stop: (11.890042s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 stop
E0807 17:49:42.484633    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-974300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-974300 stop: (11.3991693s)
--- PASS: TestErrorSpam/stop (58.83s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.04s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9660\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

                                                
                                    
TestFunctional/serial/StartWithProxy (250.03s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-100700 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0807 17:51:04.410920    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 17:53:20.453779    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 17:53:48.256693    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-100700 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (4m10.0149372s)
--- PASS: TestFunctional/serial/StartWithProxy (250.03s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (128.68s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-100700 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-100700 --alsologtostderr -v=8: (2m8.6799373s)
functional_test.go:659: soft start took 2m8.6821498s for "functional-100700" cluster.
--- PASS: TestFunctional/serial/SoftStart (128.68s)

                                                
                                    
TestFunctional/serial/KubeContext (0.13s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.13s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.25s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-100700 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (27.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 cache add registry.k8s.io/pause:3.1: (9.0652981s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 cache add registry.k8s.io/pause:3.3: (8.9962618s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 cache add registry.k8s.io/pause:latest: (8.9860749s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (27.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (11.79s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-100700 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2815901408\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-100700 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2815901408\001: (2.729145s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 cache add minikube-local-cache-test:functional-100700
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 cache add minikube-local-cache-test:functional-100700: (8.5776982s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 cache delete minikube-local-cache-test:functional-100700
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-100700
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (11.79s)
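The add_local test above builds a throwaway image with Docker, adds it to minikube's image cache, then deletes it from both the cache and the local daemon. As a standalone sketch (the image tag matches the log; the `cache_local_image` helper name is made up for illustration):

```shell
#!/bin/sh
# Round-trip a locally built image through minikube's image cache:
# build it, cache it, then remove it from the cache and the daemon.
cache_local_image() {
  tag=$1
  ctx=$2
  docker build -t "$tag" "$ctx" &&
  minikube cache add "$tag" &&
  minikube cache delete "$tag" &&
  docker rmi "$tag"
}
```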

                                                
                                    
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.27s)

                                                
                                    
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.26s)

                                                
                                    
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 ssh sudo crictl images: (9.7015786s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.70s)

                                                
                                    
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.6545446s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-100700 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.7096341s)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	W0807 17:57:29.256832    5720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 cache reload: (8.3945938s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.7261106s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (37.49s)

                                                
                                    
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.56s)

                                                
                                    
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 kubectl -- --context functional-100700 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.52s)

                                                
                                    
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 logs
E0807 18:08:20.466161    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 logs: (1m48.2110492s)
--- PASS: TestFunctional/serial/LogsCmd (108.21s)

                                                
                                    
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1104791814\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1104791814\001\logs.txt: (3m0.7185708s)
--- PASS: TestFunctional/serial/LogsFileCmd (180.72s)

                                                
                                    
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.70s)

                                                
                                    
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 ssh "echo hello": (11.6701258s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 ssh "cat /etc/hostname": (10.2561741s)
--- PASS: TestFunctional/parallel/SSHCmd (21.93s)

                                                
                                    
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 cp testdata\cp-test.txt /home/docker/cp-test.txt: (7.8998413s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 ssh -n functional-100700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 ssh -n functional-100700 "sudo cat /home/docker/cp-test.txt": (10.2550045s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 cp functional-100700:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd1063066128\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 cp functional-100700:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd1063066128\001\cp-test.txt: (9.9972827s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 ssh -n functional-100700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 ssh -n functional-100700 "sudo cat /home/docker/cp-test.txt": (9.94266s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (7.6220308s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 ssh -n functional-100700 "sudo cat /tmp/does/not/exist/cp-test.txt"
E0807 18:13:20.465730    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 ssh -n functional-100700 "sudo cat /tmp/does/not/exist/cp-test.txt": (9.9227245s)
--- PASS: TestFunctional/parallel/CpCmd (55.65s)

                                                
                                    
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/9660/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 ssh "sudo cat /etc/test/nested/copy/9660/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 ssh "sudo cat /etc/test/nested/copy/9660/hosts": (11.2826966s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (11.28s)

                                                
                                    
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/9660.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 ssh "sudo cat /etc/ssl/certs/9660.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 ssh "sudo cat /etc/ssl/certs/9660.pem": (10.9197634s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/9660.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 ssh "sudo cat /usr/share/ca-certificates/9660.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 ssh "sudo cat /usr/share/ca-certificates/9660.pem": (10.3136946s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 ssh "sudo cat /etc/ssl/certs/51391683.0": (10.2835987s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/96602.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 ssh "sudo cat /etc/ssl/certs/96602.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 ssh "sudo cat /etc/ssl/certs/96602.pem": (10.0310604s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/96602.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 ssh "sudo cat /usr/share/ca-certificates/96602.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 ssh "sudo cat /usr/share/ca-certificates/96602.pem": (9.8946499s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (9.9506097s)
--- PASS: TestFunctional/parallel/CertSync (61.40s)

                                                
                                    
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-100700 ssh "sudo systemctl is-active crio": exit status 1 (10.0035128s)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	W0807 18:23:24.819941    7192 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (10.00s)

                                                
                                    
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (3.6331009s)
--- PASS: TestFunctional/parallel/License (3.66s)

                                                
                                    
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 update-context --alsologtostderr -v=2: (2.4733994s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.47s)

                                                
                                    
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 update-context --alsologtostderr -v=2: (2.5040124s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.51s)

                                                
                                    
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 update-context --alsologtostderr -v=2: (2.4985818s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.50s)

                                                
                                    
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (10.1675478s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (10.62s)

                                                
                                    
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (10.4924096s)
functional_test.go:1311: Took "10.4925163s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "250.6018ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (10.74s)

                                                
                                    
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-100700 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (10.2163566s)
functional_test.go:1362: Took "10.216819s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "242.0564ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (10.46s)

                                                
                                    
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-100700 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3244: TerminateProcess: Access is denied.
helpers_test.go:508: unable to kill pid 3628: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.23s)

                                                
                                    
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (2.1682064s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-100700
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.42s)

                                                
                                    
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 version --short
--- PASS: TestFunctional/parallel/Version/short (0.26s)

                                                
                                    
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 version -o=json --components: (7.856234s)
--- PASS: TestFunctional/parallel/Version/components (7.86s)

                                                
                                    
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 image rm docker.io/kicbase/echo-server:functional-100700 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 image rm docker.io/kicbase/echo-server:functional-100700 --alsologtostderr: (1m0.2090463s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 image ls: (1m0.2388427s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (120.45s)

                                                
                                    
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-100700
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-100700 image save --daemon docker.io/kicbase/echo-server:functional-100700 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-100700 image save --daemon docker.io/kicbase/echo-server:functional-100700 --alsologtostderr: (59.6048382s)
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-100700
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (59.97s)

                                                
                                    
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Non-zero exit: docker rmi -f docker.io/kicbase/echo-server:1.0: context deadline exceeded (0s)
functional_test.go:191: failed to remove image "docker.io/kicbase/echo-server:1.0" from docker images. args "docker rmi -f docker.io/kicbase/echo-server:1.0": context deadline exceeded
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-100700
functional_test.go:189: (dbg) Non-zero exit: docker rmi -f docker.io/kicbase/echo-server:functional-100700: context deadline exceeded (0s)
functional_test.go:191: failed to remove image "docker.io/kicbase/echo-server:functional-100700" from docker images. args "docker rmi -f docker.io/kicbase/echo-server:functional-100700": context deadline exceeded
--- PASS: TestFunctional/delete_echo-server_images (0.02s)

                                                
                                    
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-100700
functional_test.go:197: (dbg) Non-zero exit: docker rmi -f localhost/my-image:functional-100700: context deadline exceeded (0s)
functional_test.go:199: failed to remove image my-image from docker images. args "docker rmi -f localhost/my-image:functional-100700": context deadline exceeded
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-100700
functional_test.go:205: (dbg) Non-zero exit: docker rmi -f minikube-local-cache-test:functional-100700: context deadline exceeded (0s)
functional_test.go:207: failed to remove image minikube local cache test images from docker. args "docker rmi -f minikube-local-cache-test:functional-100700": context deadline exceeded
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (752.92s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-766300 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0807 18:33:20.481423    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 18:33:38.116113    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
E0807 18:33:38.130739    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
E0807 18:33:38.146008    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
E0807 18:33:38.178111    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
E0807 18:33:38.225657    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
E0807 18:33:38.321148    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
E0807 18:33:38.495501    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
E0807 18:33:38.831420    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
E0807 18:33:39.484196    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
E0807 18:33:40.775540    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
E0807 18:33:43.349615    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
E0807 18:33:48.477619    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
E0807 18:33:58.732681    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
E0807 18:34:19.218620    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
E0807 18:35:00.182001    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
E0807 18:36:22.104149    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
E0807 18:38:03.670435    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 18:38:20.491540    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 18:38:38.106165    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
E0807 18:39:05.955737    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
E0807 18:43:20.487714    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-766300 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (11m54.7176034s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 status -v=7 --alsologtostderr
E0807 18:43:38.116006    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 status -v=7 --alsologtostderr: (38.2009256s)
--- PASS: TestMultiControlPlane/serial/StartCluster (752.92s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (12.06s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-766300 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-766300 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-766300 -- rollout status deployment/busybox: (3.8979009s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-766300 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-766300 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-766300 -- exec busybox-fc5497c4f-bjlr2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-766300 -- exec busybox-fc5497c4f-bjlr2 -- nslookup kubernetes.io: (1.9612809s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-766300 -- exec busybox-fc5497c4f-vzv8c -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-766300 -- exec busybox-fc5497c4f-wf2xw -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-766300 -- exec busybox-fc5497c4f-bjlr2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-766300 -- exec busybox-fc5497c4f-vzv8c -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-766300 -- exec busybox-fc5497c4f-wf2xw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-766300 -- exec busybox-fc5497c4f-bjlr2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-766300 -- exec busybox-fc5497c4f-vzv8c -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-766300 -- exec busybox-fc5497c4f-wf2xw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (12.06s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (275.9s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-766300 -v=7 --alsologtostderr
E0807 18:48:20.490032    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 18:48:38.114829    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-766300 -v=7 --alsologtostderr: (3m44.6131951s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-766300 status -v=7 --alsologtostderr
E0807 18:50:01.332124    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-766300 status -v=7 --alsologtostderr: (51.2867239s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (275.90s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.19s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-766300 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.19s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (30.55s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (30.5492088s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (30.55s)

                                                
                                    
TestImageBuild/serial/Setup (207.2s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-962700 --driver=hyperv
E0807 19:06:41.358187    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-962700 --driver=hyperv: (3m27.2014461s)
--- PASS: TestImageBuild/serial/Setup (207.20s)

                                                
                                    
TestImageBuild/serial/NormalBuild (10.97s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-962700
E0807 19:08:20.510931    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-962700: (10.9688116s)
--- PASS: TestImageBuild/serial/NormalBuild (10.97s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (9.37s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-962700
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-962700: (9.3677823s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (9.37s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (8.72s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-962700
E0807 19:08:38.129450    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-962700: (8.7177444s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (8.72s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.77s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-962700
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-962700: (8.7721115s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.77s)

                                                
                                    
TestJSONOutput/start/Command (217.35s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-600700 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0807 19:11:23.709068    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-600700 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m37.3465044s)
--- PASS: TestJSONOutput/start/Command (217.35s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (8.11s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-600700 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-600700 --output=json --user=testUser: (8.1045748s)
--- PASS: TestJSONOutput/pause/Command (8.11s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (8.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-600700 --output=json --user=testUser
E0807 19:13:20.507840    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-600700 --output=json --user=testUser: (8.0514601s)
--- PASS: TestJSONOutput/unpause/Command (8.05s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (35.15s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-600700 --output=json --user=testUser
E0807 19:13:38.144503    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-600700 --output=json --user=testUser: (35.1470988s)
--- PASS: TestJSONOutput/stop/Command (35.15s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (1.51s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-448500 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-448500 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (274.3503ms)

-- stdout --
	{"specversion":"1.0","id":"da5d3436-57b1-418e-85a0-983b01f68b27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-448500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d9bd02b4-f067-494a-bc91-a931870aa2a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube6\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"dd66444d-23e4-4dda-bf49-baab9918a6ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"095bf6bf-2866-464a-8e85-1f8af74e724e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"5e5c88cd-4d7f-4395-9ddf-9d842846b66f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19389"}}
	{"specversion":"1.0","id":"7a3f0e41-250d-4253-ab84-0ae8908db100","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c8e851f0-f8d1-4919-8e6d-0ebf7dd5bcc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
** stderr ** 
	W0807 19:14:17.080898   10136 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-448500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-448500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-448500: (1.2318833s)
--- PASS: TestErrorJSONOutput (1.51s)

                                                
                                    
TestMainNoArgs (0.26s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.26s)

                                                
                                    
TestMinikubeProfile (544.85s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-576300 --driver=hyperv
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-576300 --driver=hyperv: (3m21.2555539s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-576300 --driver=hyperv
E0807 19:18:20.520932    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 19:18:38.148939    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-576300 --driver=hyperv: (3m25.7761833s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-576300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (22.5118848s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-576300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (22.2580139s)
helpers_test.go:175: Cleaning up "second-576300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-576300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-576300: (45.9488473s)
helpers_test.go:175: Cleaning up "first-576300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-576300
E0807 19:23:20.514260    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 19:23:21.385075    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-576300: (46.2356687s)
--- PASS: TestMinikubeProfile (544.85s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (160.23s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-878600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0807 19:23:38.146720    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-878600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m39.2148609s)
--- PASS: TestMountStart/serial/StartWithMountFirst (160.23s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (9.91s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-878600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-878600 ssh -- ls /minikube-host: (9.9138961s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.91s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (163.28s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-878600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0807 19:28:03.731643    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 19:28:20.525343    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 19:28:38.142951    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-878600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m42.279713s)
--- PASS: TestMountStart/serial/StartWithMountSecond (163.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-878600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-878600 ssh -- ls /minikube-host: (9.9246803s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.93s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-878600 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-878600 --alsologtostderr -v=5: (28.1830859s)
--- PASS: TestMountStart/serial/DeleteFirst (28.18s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-878600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-878600 ssh -- ls /minikube-host: (9.7752385s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.78s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-878600
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-878600: (28.0130836s)
--- PASS: TestMountStart/serial/Stop (28.01s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-116700 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0807 19:38:20.531943    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 19:38:38.157961    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
E0807 19:40:01.406464    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-116700 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (7m13.9150472s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 status --alsologtostderr: (24.96263s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (458.88s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-116700 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-116700 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-116700 -- rollout status deployment/busybox: (2.9200103s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-116700 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-116700 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-116700 -- exec busybox-fc5497c4f-jpc88 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-116700 -- exec busybox-fc5497c4f-jpc88 -- nslookup kubernetes.io: (2.1110922s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-116700 -- exec busybox-fc5497c4f-s4njd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-116700 -- exec busybox-fc5497c4f-jpc88 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-116700 -- exec busybox-fc5497c4f-s4njd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-116700 -- exec busybox-fc5497c4f-jpc88 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-116700 -- exec busybox-fc5497c4f-s4njd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.28s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-116700 -v 3 --alsologtostderr
E0807 19:43:20.533945    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 19:43:38.159861    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
E0807 19:44:43.758525    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-116700 -v 3 --alsologtostderr: (3m26.1593788s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 status --alsologtostderr: (37.1738262s)
--- PASS: TestMultiNode/serial/AddNode (243.33s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-116700 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.19s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (12.3938s)
--- PASS: TestMultiNode/serial/ProfileList (12.40s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 status --output json --alsologtostderr: (37.0697464s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 cp testdata\cp-test.txt multinode-116700:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 cp testdata\cp-test.txt multinode-116700:/home/docker/cp-test.txt: (9.8866124s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700 "sudo cat /home/docker/cp-test.txt"
E0807 19:48:20.535136    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700 "sudo cat /home/docker/cp-test.txt": (9.7200464s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 cp multinode-116700:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile3663502109\001\cp-test_multinode-116700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 cp multinode-116700:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile3663502109\001\cp-test_multinode-116700.txt: (9.7567059s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700 "sudo cat /home/docker/cp-test.txt"
E0807 19:48:38.160028    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700 "sudo cat /home/docker/cp-test.txt": (9.7639629s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 cp multinode-116700:/home/docker/cp-test.txt multinode-116700-m02:/home/docker/cp-test_multinode-116700_multinode-116700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 cp multinode-116700:/home/docker/cp-test.txt multinode-116700-m02:/home/docker/cp-test_multinode-116700_multinode-116700-m02.txt: (17.1323555s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700 "sudo cat /home/docker/cp-test.txt": (9.7996954s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m02 "sudo cat /home/docker/cp-test_multinode-116700_multinode-116700-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m02 "sudo cat /home/docker/cp-test_multinode-116700_multinode-116700-m02.txt": (9.9286783s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 cp multinode-116700:/home/docker/cp-test.txt multinode-116700-m03:/home/docker/cp-test_multinode-116700_multinode-116700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 cp multinode-116700:/home/docker/cp-test.txt multinode-116700-m03:/home/docker/cp-test_multinode-116700_multinode-116700-m03.txt: (17.5019032s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700 "sudo cat /home/docker/cp-test.txt": (9.8521108s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m03 "sudo cat /home/docker/cp-test_multinode-116700_multinode-116700-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m03 "sudo cat /home/docker/cp-test_multinode-116700_multinode-116700-m03.txt": (9.866277s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 cp testdata\cp-test.txt multinode-116700-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 cp testdata\cp-test.txt multinode-116700-m02:/home/docker/cp-test.txt: (9.8632039s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m02 "sudo cat /home/docker/cp-test.txt": (10.1263097s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 cp multinode-116700-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile3663502109\001\cp-test_multinode-116700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 cp multinode-116700-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile3663502109\001\cp-test_multinode-116700-m02.txt: (10.2345865s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m02 "sudo cat /home/docker/cp-test.txt": (10.2334286s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 cp multinode-116700-m02:/home/docker/cp-test.txt multinode-116700:/home/docker/cp-test_multinode-116700-m02_multinode-116700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 cp multinode-116700-m02:/home/docker/cp-test.txt multinode-116700:/home/docker/cp-test_multinode-116700-m02_multinode-116700.txt: (17.6904332s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m02 "sudo cat /home/docker/cp-test.txt": (9.9197476s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700 "sudo cat /home/docker/cp-test_multinode-116700-m02_multinode-116700.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700 "sudo cat /home/docker/cp-test_multinode-116700-m02_multinode-116700.txt": (9.8703723s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 cp multinode-116700-m02:/home/docker/cp-test.txt multinode-116700-m03:/home/docker/cp-test_multinode-116700-m02_multinode-116700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 cp multinode-116700-m02:/home/docker/cp-test.txt multinode-116700-m03:/home/docker/cp-test_multinode-116700-m02_multinode-116700-m03.txt: (17.4042714s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m02 "sudo cat /home/docker/cp-test.txt": (10.1420153s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m03 "sudo cat /home/docker/cp-test_multinode-116700-m02_multinode-116700-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m03 "sudo cat /home/docker/cp-test_multinode-116700-m02_multinode-116700-m03.txt": (9.9168357s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 cp testdata\cp-test.txt multinode-116700-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 cp testdata\cp-test.txt multinode-116700-m03:/home/docker/cp-test.txt: (9.898639s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m03 "sudo cat /home/docker/cp-test.txt": (9.8946367s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 cp multinode-116700-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile3663502109\001\cp-test_multinode-116700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 cp multinode-116700-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile3663502109\001\cp-test_multinode-116700-m03.txt: (9.9266444s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m03 "sudo cat /home/docker/cp-test.txt": (9.8478245s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 cp multinode-116700-m03:/home/docker/cp-test.txt multinode-116700:/home/docker/cp-test_multinode-116700-m03_multinode-116700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 cp multinode-116700-m03:/home/docker/cp-test.txt multinode-116700:/home/docker/cp-test_multinode-116700-m03_multinode-116700.txt: (17.2120018s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m03 "sudo cat /home/docker/cp-test.txt": (9.8033289s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700 "sudo cat /home/docker/cp-test_multinode-116700-m03_multinode-116700.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700 "sudo cat /home/docker/cp-test_multinode-116700-m03_multinode-116700.txt": (9.8810025s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 cp multinode-116700-m03:/home/docker/cp-test.txt multinode-116700-m02:/home/docker/cp-test_multinode-116700-m03_multinode-116700-m02.txt
E0807 19:53:20.540972    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 cp multinode-116700-m03:/home/docker/cp-test.txt multinode-116700-m02:/home/docker/cp-test_multinode-116700-m03_multinode-116700-m02.txt: (17.4155699s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m03 "sudo cat /home/docker/cp-test.txt"
E0807 19:53:38.172027    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m03 "sudo cat /home/docker/cp-test.txt": (9.9062948s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m02 "sudo cat /home/docker/cp-test_multinode-116700-m03_multinode-116700-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 ssh -n multinode-116700-m02 "sudo cat /home/docker/cp-test_multinode-116700-m03_multinode-116700-m02.txt": (9.8690905s)
--- PASS: TestMultiNode/serial/CopyFile (379.36s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 node stop m03: (24.6544403s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-116700 status: exit status 7 (27.0446403s)

-- stdout --
	multinode-116700
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-116700-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-116700-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0807 19:54:15.641498    1816 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-116700 status --alsologtostderr: exit status 7 (27.2911379s)

-- stdout --
	multinode-116700
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-116700-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-116700-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0807 19:54:42.680201   12888 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0807 19:54:42.763774   12888 out.go:291] Setting OutFile to fd 1668 ...
	I0807 19:54:42.764802   12888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:54:42.764802   12888 out.go:304] Setting ErrFile to fd 1716...
	I0807 19:54:42.764802   12888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:54:42.781602   12888 out.go:298] Setting JSON to false
	I0807 19:54:42.781602   12888 mustload.go:65] Loading cluster: multinode-116700
	I0807 19:54:42.781602   12888 notify.go:220] Checking for updates...
	I0807 19:54:42.782348   12888 config.go:182] Loaded profile config "multinode-116700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 19:54:42.782348   12888 status.go:255] checking status of multinode-116700 ...
	I0807 19:54:42.783076   12888 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:54:45.098232   12888 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:54:45.098232   12888 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:54:45.098232   12888 status.go:330] multinode-116700 host status = "Running" (err=<nil>)
	I0807 19:54:45.098232   12888 host.go:66] Checking if "multinode-116700" exists ...
	I0807 19:54:45.099210   12888 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:54:47.358468   12888 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:54:47.358468   12888 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:54:47.358468   12888 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:54:50.033270   12888 main.go:141] libmachine: [stdout =====>] : 172.28.224.86
	
	I0807 19:54:50.033270   12888 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:54:50.033270   12888 host.go:66] Checking if "multinode-116700" exists ...
	I0807 19:54:50.050044   12888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 19:54:50.050044   12888 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700 ).state
	I0807 19:54:52.297135   12888 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:54:52.297684   12888 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:54:52.297835   12888 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700 ).networkadapters[0]).ipaddresses[0]
	I0807 19:54:54.961242   12888 main.go:141] libmachine: [stdout =====>] : 172.28.224.86
	
	I0807 19:54:54.961242   12888 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:54:54.962233   12888 sshutil.go:53] new ssh client: &{IP:172.28.224.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700\id_rsa Username:docker}
	I0807 19:54:55.070517   12888 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.0203391s)
	I0807 19:54:55.085681   12888 ssh_runner.go:195] Run: systemctl --version
	I0807 19:54:55.108316   12888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 19:54:55.132292   12888 kubeconfig.go:125] found "multinode-116700" server: "https://172.28.224.86:8443"
	I0807 19:54:55.132292   12888 api_server.go:166] Checking apiserver status ...
	I0807 19:54:55.144334   12888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 19:54:55.184457   12888 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2151/cgroup
	W0807 19:54:55.202464   12888 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2151/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0807 19:54:55.212458   12888 ssh_runner.go:195] Run: ls
	I0807 19:54:55.220766   12888 api_server.go:253] Checking apiserver healthz at https://172.28.224.86:8443/healthz ...
	I0807 19:54:55.228732   12888 api_server.go:279] https://172.28.224.86:8443/healthz returned 200:
	ok
	I0807 19:54:55.228732   12888 status.go:422] multinode-116700 apiserver status = Running (err=<nil>)
	I0807 19:54:55.228732   12888 status.go:257] multinode-116700 status: &{Name:multinode-116700 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 19:54:55.228732   12888 status.go:255] checking status of multinode-116700-m02 ...
	I0807 19:54:55.229163   12888 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:54:57.450154   12888 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:54:57.450434   12888 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:54:57.450434   12888 status.go:330] multinode-116700-m02 host status = "Running" (err=<nil>)
	I0807 19:54:57.450554   12888 host.go:66] Checking if "multinode-116700-m02" exists ...
	I0807 19:54:57.451182   12888 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:54:59.714743   12888 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:54:59.715005   12888 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:54:59.715005   12888 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 19:55:02.432587   12888 main.go:141] libmachine: [stdout =====>] : 172.28.226.55
	
	I0807 19:55:02.432587   12888 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:55:02.432942   12888 host.go:66] Checking if "multinode-116700-m02" exists ...
	I0807 19:55:02.447659   12888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 19:55:02.447659   12888 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m02 ).state
	I0807 19:55:04.754680   12888 main.go:141] libmachine: [stdout =====>] : Running
	
	I0807 19:55:04.754680   12888 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:55:04.755693   12888 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-116700-m02 ).networkadapters[0]).ipaddresses[0]
	I0807 19:55:07.419864   12888 main.go:141] libmachine: [stdout =====>] : 172.28.226.55
	
	I0807 19:55:07.420868   12888 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:55:07.421003   12888 sshutil.go:53] new ssh client: &{IP:172.28.226.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-116700-m02\id_rsa Username:docker}
	I0807 19:55:07.524776   12888 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.0770517s)
	I0807 19:55:07.537695   12888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 19:55:07.563144   12888 status.go:257] multinode-116700-m02 status: &{Name:multinode-116700-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0807 19:55:07.563144   12888 status.go:255] checking status of multinode-116700-m03 ...
	I0807 19:55:07.563940   12888 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-116700-m03 ).state
	I0807 19:55:09.832863   12888 main.go:141] libmachine: [stdout =====>] : Off
	
	I0807 19:55:09.832863   12888 main.go:141] libmachine: [stderr =====>] : 
	I0807 19:55:09.832863   12888 status.go:330] multinode-116700-m03 host status = "Stopped" (err=<nil>)
	I0807 19:55:09.832863   12888 status.go:343] host is not running, skipping remaining checks
	I0807 19:55:09.832863   12888 status.go:257] multinode-116700-m03 status: &{Name:multinode-116700-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (78.99s)
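The status sequence above ends with a capacity probe: minikube runs `df -h /var | awk 'NR==2{print $5}'` over SSH to read the Use% column for the filesystem holding /var. The same pipeline can be tried standalone on any Linux host:

```shell
# `df -h /var` prints a header row plus one data row for the filesystem
# that serves /var; awk's NR==2 selects the data row and $5 is the Use%
# column (e.g. "12%").
usage=$(df -h /var | awk 'NR==2{print $5}')
echo "/var usage: ${usage}"
```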

TestMultiNode/serial/StartAfterStop (200s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 node start m03 -v=7 --alsologtostderr
E0807 19:56:41.432006    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 node start m03 -v=7 --alsologtostderr: (2m42.631817s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-116700 status -v=7 --alsologtostderr
E0807 19:58:20.546592    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-116700 status -v=7 --alsologtostderr: (37.1825826s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (200.00s)

TestPreload (546.81s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-356100 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0807 20:08:20.555786    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 20:08:38.187901    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-356100 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m41.8044091s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-356100 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-356100 image pull gcr.io/k8s-minikube/busybox: (8.916058s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-356100
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-356100: (40.8159757s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-356100 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0807 20:13:20.556774    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 20:13:21.447136    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
E0807 20:13:38.181859    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-356100 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m43.9380354s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-356100 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-356100 image list: (7.7077059s)
helpers_test.go:175: Cleaning up "test-preload-356100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-356100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-356100: (43.625349s)
--- PASS: TestPreload (546.81s)

TestScheduledStopWindows (340.73s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-853300 --memory=2048 --driver=hyperv
E0807 20:18:03.798217    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 20:18:20.560322    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-463600\client.crt: The system cannot find the path specified.
E0807 20:18:38.188024    9660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-100700\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-853300 --memory=2048 --driver=hyperv: (3m25.8507796s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-853300 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-853300 --schedule 5m: (11.1973639s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-853300 -n scheduled-stop-853300
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-853300 -n scheduled-stop-853300: exit status 1 (10.020057s)

** stderr ** 
	W0807 20:19:52.171436    5016 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-853300 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-853300 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.9815468s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-853300 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-853300 --schedule 5s: (10.9343825s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-853300
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-853300: exit status 7 (2.5462474s)

-- stdout --
	scheduled-stop-853300
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	W0807 20:21:23.114882    9536 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-853300 -n scheduled-stop-853300
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-853300 -n scheduled-stop-853300: exit status 7 (2.4987973s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0807 20:21:25.658222   13644 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-853300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-853300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-853300: (27.6870985s)
--- PASS: TestScheduledStopWindows (340.73s)
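The test above schedules stops with `minikube stop --schedule 5m` / `--schedule 5s` and then polls `status --format={{.TimeToStop}}`. As an illustrative sketch only (not minikube's implementation, which persists the schedule on the guest as the minikube-scheduled-stop systemd unit queried in the log above), the pattern reduces to arming a background timer:

```shell
# Illustrative only: arm a background timer that "stops" after a delay.
schedule_stop() {
  # $1 = delay in seconds; the real --schedule flag takes durations like 5m or 5s
  ( sleep "$1" && echo "stop triggered" ) &
  timer_pid=$!
}

start=$(date +%s)
schedule_stop 1
wait "$timer_pid"    # block until the scheduled stop fires
elapsed=$(( $(date +%s) - start ))
echo "timer fired after ${elapsed}s"
```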

TestNoKubernetes/serial/StartNoK8sWithVersion (0.44s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-662200 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-662200 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (437.166ms)

-- stdout --
	* [NoKubernetes-662200] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19389
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0807 20:21:55.882333    4856 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.44s)
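The MK_USAGE exit (status 14) above comes from minikube's own flag validation, which rejects `--kubernetes-version` combined with `--no-kubernetes`. A minimal shell sketch of that mutual-exclusion check, with hypothetical variable names standing in for the parsed flags:

```shell
# Hypothetical variables standing in for parsed CLI flags; only the
# mutual-exclusion logic mirrors the MK_USAGE failure shown above.
no_kubernetes=true
kubernetes_version="1.20"
status=0
if [ "$no_kubernetes" = "true" ] && [ -n "$kubernetes_version" ]; then
  echo "Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes" >&2
  status=14
fi
echo "exit status: $status"
```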


Test skip (31/197)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (8.38s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-100700 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-100700 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 1596: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (8.38s)

TestFunctional/parallel/DryRun (5.04s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-100700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-100700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0415113s)

-- stdout --
	* [functional-100700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19389
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0807 18:24:24.478078    1512 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0807 18:24:24.553465    1512 out.go:291] Setting OutFile to fd 1380 ...
	I0807 18:24:24.554197    1512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:24:24.554197    1512 out.go:304] Setting ErrFile to fd 1192...
	I0807 18:24:24.554197    1512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:24:24.578349    1512 out.go:298] Setting JSON to false
	I0807 18:24:24.581343    1512 start.go:129] hostinfo: {"hostname":"minikube6","uptime":316994,"bootTime":1722738070,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4717 Build 19045.4717","kernelVersion":"10.0.19045.4717 Build 19045.4717","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0807 18:24:24.581343    1512 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 18:24:24.585829    1512 out.go:177] * [functional-100700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	I0807 18:24:24.588564    1512 notify.go:220] Checking for updates...
	I0807 18:24:24.588917    1512 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 18:24:24.591775    1512 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 18:24:24.594388    1512 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0807 18:24:24.596578    1512 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 18:24:24.600850    1512 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 18:24:24.604348    1512 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:24:24.605125    1512 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.04s)
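The recurring stderr warning above points at a context directory with an opaque name. Docker's CLI context store keys each context by the SHA-256 digest of its name, so the `37a8eec1…` directory is simply the digest of the string `default`; a quick check (assuming `sha256sum` is available):

```shell
# Docker stores CLI contexts under ~/.docker/contexts/meta/<sha256(name)>/meta.json.
# Hashing "default" reproduces the directory name seen in the warning above.
digest=$(printf '%s' default | sha256sum | awk '{print $1}')
echo "sha256(default) = $digest"
```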

TestFunctional/parallel/InternationalLanguage (5.04s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-100700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-100700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0369202s)

-- stdout --
	* [functional-100700] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19389
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0807 18:23:34.821286    5948 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0807 18:23:34.900170    5948 out.go:291] Setting OutFile to fd 1068 ...
	I0807 18:23:34.900737    5948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:23:34.900737    5948 out.go:304] Setting ErrFile to fd 1164...
	I0807 18:23:34.900737    5948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:23:34.924601    5948 out.go:298] Setting JSON to false
	I0807 18:23:34.928346    5948 start.go:129] hostinfo: {"hostname":"minikube6","uptime":316944,"bootTime":1722738070,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4717 Build 19045.4717","kernelVersion":"10.0.19045.4717 Build 19045.4717","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0807 18:23:34.928346    5948 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0807 18:23:34.933462    5948 out.go:177] * [functional-100700] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4717 Build 19045.4717
	I0807 18:23:34.936447    5948 notify.go:220] Checking for updates...
	I0807 18:23:34.939390    5948 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0807 18:23:34.942404    5948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 18:23:34.945106    5948 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0807 18:23:34.947653    5948 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 18:23:34.951081    5948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 18:23:34.955649    5948 config.go:182] Loaded profile config "functional-100700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0807 18:23:34.957236    5948 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.04s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    